
Preventing blocking locks for (required) long running transactions

Status
Not open for further replies.

regilobo

Programmer
Aug 18, 2003
85
US
SQL Server 2000 - using ODBC to connect.
I have autocommit turned off and implicit transaction support enabled.

The gist of it is that I have to hold database transactions open while waiting for a remote client to respond. This can take up to 30 seconds from initial connect to response; actual interaction with the db takes about 650 ms. I commit the tx on ACK, and roll back if the client disconnects or NAKs.

This works great for a single client. There are a few table updates and inserts that occur with each client connection and they are all viewed as a single transaction to be committed or rolled back depending on if the client actually processes the response.
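The single-client flow described above can be sketched roughly as follows. This is a hypothetical illustration using Python's built-in sqlite3 module as a stand-in for SQL Server over ODBC; the table, column, and function names are invented for the example.

```python
import sqlite3

# Hypothetical sketch of the commit-on-ACK pattern, using in-memory
# SQLite as a stand-in for SQL Server; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (member TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO balances VALUES ('m1', 100)")
conn.commit()

def handle_request(conn, member, amount, client_acked):
    """Apply the updates in one transaction; commit only if the client ACKs."""
    try:
        # With autocommit off, the transaction starts at the first write.
        conn.execute("UPDATE balances SET balance = balance - ? WHERE member = ?",
                     (amount, member))
        # ... up to 30 seconds pass while the remote client responds ...
        if client_acked():
            conn.commit()       # ACK received: make the changes permanent
            return "committed"
        conn.rollback()         # NAK or disconnect: undo everything
        return "rolled back"
    except Exception:
        conn.rollback()
        raise

print(handle_request(conn, "m1", 30, lambda: True))   # committed
print(handle_request(conn, "m1", 30, lambda: False))  # rolled back
bal = conn.execute("SELECT balance FROM balances WHERE member = 'm1'").fetchone()[0]
print(bal)  # 70 -- only the ACKed update persisted
```

The problem in the thread is exactly the long gap between the first write and the commit/rollback: any locks taken by the write are held for the whole wait.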

But when a second client connects, the first has a lock on the relevant tables, and they end up in a tug of war that neither will give up. SQL Server doesn't break the lock, and from my reading it appears this is a particular situation that can occur with distributed client/server deadlocks.

So, my dilemma is how to maintain the atomicity of the transaction over a 30 second timespan and still allow concurrent access to the db.

Suggestions? Thanks..
 
regilobo,

Are you certain you are not reaching a Deadlock situation here?

If so, and in any case, you may want to use the SET LOCK_TIMEOUT statement to cause the second thread to time out quickly.

I wasn't aware of the stated Client/Server deadlock. Where is the reference to this?

Thanks,

Logicalman
 
Reference (BOL): Understanding and Avoiding Blocking

The suggestion is to use a query time-out, a lock time-out via SET LOCK_TIMEOUT, or a bound connection.

I'm investigating these, and may end up setting a lock timeout and waiting until the lock is free. I'm not sure I like that, because it could cause the additional clients to wait an extra 25 seconds or so until the 1st client disconnects.
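The lock time-out behaviour can be tried out in miniature. In SQL Server the session setting is SET LOCK_TIMEOUT with a value in milliseconds (a timed-out statement fails with error 1222, "Lock request time out period exceeded"). The sketch below simulates the same situation with Python's sqlite3 module, whose connect timeout plays an analogous role: the second client gives up after roughly 200 ms instead of blocking for the full 30 seconds. Everything here is illustrative, not taken from the original setup.

```python
import os
import sqlite3
import tempfile

# Illustrative only: SQLite's busy timeout used as an analogue of
# SQL Server's SET LOCK_TIMEOUT. isolation_level=None gives explicit
# transaction control.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (id INTEGER)")

writer.execute("BEGIN IMMEDIATE")           # first client: long tx holds the write lock
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, timeout=0.2, isolation_level=None)  # ~200 ms lock timeout
try:
    reader.execute("BEGIN IMMEDIATE")       # would block on the writer; times out instead
    result = "acquired"
except sqlite3.OperationalError:
    result = "timed out"                    # caller can retry, queue, or report busy
print(result)  # timed out

writer.execute("COMMIT")                    # first client ACKs; lock is released
reader.execute("BEGIN IMMEDIATE")           # now succeeds immediately
reader.execute("ROLLBACK")
```

The timeout converts an indefinite block into a fast failure, but the caller still has to decide what to do next (retry, queue, or reject), which is the waiting problem described above.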

Another thing that I'm considering is switching to ADO and using disconnected recordsets. I'm just not sure that this will work the way I'm thinking, because I don't know what will happen once the disconnected recordsets are reconnected and updated.
 
regilobo,

Re-reading your initial question, it occurs to me that if the remote DBs are causing updates to the data, and this appears to be a subscription PULL type of set-up, then you may be better off simply storing the data pulled from the remotes and running all the transactions once the data is received, rather than setting locks on tables that cannot be updated until the locks are released.

I would need to know exactly what type of transactions are occurring during the connections to determine the best course of action to suggest.

Logicalman
 
LogicalManUS,

Thanks for the responses. Let me elaborate on the setup now:
The clients are dumb credit-card-like terminals that dial in to a communications host. That host then contacts my application on the local machine. The application parses the request and fetches data or modifies records, and so on, via ODBC over TCP/IP. The db transaction is started with the first INSERT or UPDATE on the database.
The app then returns the data to the comm host, which relays it back to the terminal. Once the terminal has received the data, it ACKs the comm host, which relays that to my app, so I know that the terminal and customer have received their "receipt". The app can then commit the transaction, or roll it back if an ACK wasn't received.

The simplified db interaction goes something like this for most tx's:
SELECT member from members table
INSERT transaction INTO transaction table
UPDATE balance table SET balance for member

There are cases where there are a couple more inserts depending on if it is a new member and so forth, but generally that's all that needs to happen.
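The three-statement sequence above, treated as one atomic unit, might look like the sketch below (in-memory SQLite again; the schema, names, and amount are invented for illustration):

```python
import sqlite3

# Sketch of the SELECT / INSERT / UPDATE sequence as one atomic unit.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (member_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transactions (tx_id INTEGER PRIMARY KEY,
                               member_id INTEGER, amount INTEGER);
    CREATE TABLE balances (member_id INTEGER PRIMARY KEY, balance INTEGER);
    INSERT INTO members VALUES (1, 'alice');
    INSERT INTO balances VALUES (1, 100);
""")

amount = 25
# SELECT member from the members table
member_id = conn.execute("SELECT member_id FROM members "
                         "WHERE name = 'alice'").fetchone()[0]
# INSERT transaction INTO the transaction table
conn.execute("INSERT INTO transactions (member_id, amount) VALUES (?, ?)",
             (member_id, amount))
# UPDATE the balance table for that member
conn.execute("UPDATE balances SET balance = balance - ? WHERE member_id = ?",
             (amount, member_id))
conn.commit()   # or conn.rollback() on NAK: both writes succeed or fail together

print(conn.execute("SELECT balance FROM balances "
                   "WHERE member_id = 1").fetchone()[0])  # 75
```

Holding this open for 30 seconds while waiting on the ACK is what blocks the second client.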

I'm thinking that I'm going to end up having to wait to post the transaction and update the balance until the client ACKs, but this presents a problem, as I need to return to the client some of the data that results from the inserts and updates: various ID codes, etc.

Thanks!
 
regilobo,

Interesting set-up.
My initial reaction was to mention row locking, but, as I am now aware, SQL Server 2000 determines its own level of locking, so that one's out.

What if you were to use hash tables to emulate the real tables that require updating? For example, when a request comes in, a select is made on the members table, and the lock is released immediately once the ID is obtained.
The ID is placed in a hash table. Using the ID, a lookup is made on the balance table, and the lock is released once the balance is obtained and placed in the hash table.
If the transaction is approved (e.g. sufficient funds), then a new balance is passed back to the client. Once an acknowledgement is received, the balance table is updated with the new balance and an entry is made into the transaction table.
In the above scenario, there are no persistent locks on any of the tables, and therefore no queueing is required (unless two processes hit one of the first two tables at the same time, and even then the lock is only present for a fraction of a second).
If a transaction number is required, the next transaction number can be obtained from the transaction table and the record partially completed, then fully completed once the ACK is received.
Should the ACK not be received in a reasonable amount of time, the transaction table record is marked 'failed'.
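The staged-write idea above might look something like this sketch: short-lived reads, results parked in an in-memory structure (a plain dict standing in for the hash table), a 'pending' transaction row reserved up front so its number can be returned to the client, and the balance written only after the ACK. The SQLite database, schema, and names are all illustrative assumptions.

```python
import sqlite3

# Hedged sketch of the staged-write pattern: no locks are held while
# waiting for the client to acknowledge.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE balances (member_id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE transactions (tx_id INTEGER PRIMARY KEY AUTOINCREMENT,
                               member_id INTEGER, amount INTEGER, status TEXT);
    INSERT INTO balances VALUES (1, 100);
""")

def begin_request(conn, member_id, amount):
    # Short read; the lock is released as soon as the row is fetched.
    bal = conn.execute("SELECT balance FROM balances WHERE member_id = ?",
                       (member_id,)).fetchone()[0]
    if bal < amount:
        return None                      # insufficient funds
    # Reserve a transaction number now so it can be returned to the client.
    cur = conn.execute("INSERT INTO transactions (member_id, amount, status) "
                       "VALUES (?, ?, 'pending')", (member_id, amount))
    conn.commit()
    # Stage everything needed to finish later; no locks held meanwhile.
    return {"tx_id": cur.lastrowid, "member_id": member_id,
            "amount": amount, "new_balance": bal - amount}

def finish_request(conn, staged, acked):
    if acked:
        conn.execute("UPDATE balances SET balance = ? WHERE member_id = ?",
                     (staged["new_balance"], staged["member_id"]))
    conn.execute("UPDATE transactions SET status = ? WHERE tx_id = ?",
                 ("completed" if acked else "failed", staged["tx_id"]))
    conn.commit()

staged = begin_request(conn, 1, 40)
# ... up to 30 s while the terminal responds; other clients proceed freely ...
finish_request(conn, staged, acked=True)
print(conn.execute("SELECT balance FROM balances "
                   "WHERE member_id = 1").fetchone()[0])  # 60
```

The trade-off is that atomicity is now enforced by the application (the pending/completed/failed status) rather than by a single database transaction, so a concurrent update between the read and the final write would need handling, e.g. by re-checking the balance in the final UPDATE.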

Just some thoughts.

Logicalman
 
