
Releasing Locked records

Status
Not open for further replies.

DerFarm

Programmer
Mar 10, 2003
25
I'm writing an application that requires a large number of updates. I'm getting the error message that I have exceeded the MaxLocksPerFile Registry entry.

I really don't want to mess with registry entries in programs that will be used by people who have trouble turning on the machine in the morning.

Is there any way to release a lock on a record after the update is completed?

Thanx

 
try setting the property value through the Connection object, prior to establishing it, using "Jet OLEDB:Max Locks Per File"
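In ADO from VB6 this is a provider-specific property on the Connection's Properties collection. As a rough illustration of the same idea from Python, here is a sketch; the property name and the 9,500 default come from the Jet OLEDB documentation, while the commented win32com portion is a hypothetical example that would need Windows, pywin32, and the Jet provider installed:

```python
# The real Jet OLEDB dynamic property name; the value overrides the
# MaxLocksPerFile registry setting for this connection only.
JET_MAX_LOCKS = "Jet OLEDB:Max Locks Per File"

def lock_property_settings(max_locks=20000):
    """Return the provider property assignments to apply to the ADO
    Connection's Properties collection before heavy updating."""
    return {JET_MAX_LOCKS: max_locks}

# On Windows with pywin32, applying them would look roughly like
# (illustrative only, paths and provider string are assumptions):
#
#   import win32com.client
#   cn = win32com.client.Dispatch("ADODB.Connection")
#   cn.Open("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\data\\app.mdb")
#   for name, value in lock_property_settings().items():
#       cn.Properties(name).Value = value
```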
 
For extensive updating, open the database exclusively and use optimistic or batch locking.
 

I think we first need to clarify whether the OP is talking about bulk updates/deletes using a bulk update SQL statement with the connection's .Execute method (DELETE FROM MyTable), or about updates in a recordset loop, all within a single transaction; and if the latter, what the original purpose of updating the batch within a transaction was in the first place, if they are willing to change to single-record updating outside of a batch transaction.

Though, in either case, batch locking within a transaction seems to be the cause of the issue (the transaction causes the locks), and optimistic locking on individual record updates doesn't enforce batch data update integrity, if that's needed. The need to open the db exclusively for this seems to be somewhat of a quick-fix solution when we may be talking about only several tens of thousands of records, where the error would occur.

If this is so, then I do not see what disadvantage there would be to simply changing MaxLocksPerFile for a particular connection, or to creating a simple registry key for just your application's Jet settings and, via the connection properties, routing Jet to use that key rather than its default engine keys (both solutions leave untouched the Jet settings that other applications use, as would not be the case if you changed the default Jet engine key values).
If not so (batch update integrity is not needed), and we are talking about bulk updates (DELETE FROM...), then you can also turn off the use of a transaction on these bulk updates through yet another Jet connection property, and the high number of locks shouldn't be needed at all.
 
The actions are updates to a single record in a recordset. I realize that the max number of locks is huge, but, IN THEORY, the lock max COULD be reached, or worse, changed by another application between iterations.

I'm pretty much a purist when it comes to writing code (evidence of a misspent youth), and when I tell the client the program will not exit abnormally due to violating system state, I have to guarantee it.

Accordingly, I am keeping track of the locks. At 99% of MaxLocksPerFile (about 9,400 given the default of 9,500), I close and re-open the recordset. This application is not time sensitive and I'm willing to take the time hit. More importantly, I can prove to the client that it will work in theory and in practice as advertised. And keeping this client happy is important to me.
 

>or worse, changed by another application between iterations.

I do not think that would affect the already opened connections.


It sounds like you have an adLockBatchOptimistic LockType on a Static cursor, and are looping through many, many records, updating them locally with the .Update method, and then, on completion, calling .UpdateBatch?

That would place a transaction on the whole batch, and hence the need for the larger number of locks when the db is opened as shared.

If this is the case (and that would also cause your problem), then I guess you have basically three/four choices as mentioned by dilettante and myself:

1. Update each record back to the db table on each loop iteration. Within each loop, call .UpdateBatch instead of .Update. Or use a server-side cursor and change the lock type to adLockOptimistic, in which case UpdateBatch isn't needed, as the Update method with a server-side cursor updates the data in the table immediately (as UpdateBatch does with a client-side cursor). (Both of these lack batch update integrity, which would otherwise assure that all records are updated or the complete batch fails.)

2. Increase the Locks and continue to use BatchUpdating (assures that all records update, or fail)

3. Do not open the db as shared.

 
Well, assuming that SB is correct (and it looks like a very good guess), there's also

4. Keep a counter of the updated records, and when you have a certain number, say a few records less than the maximum allowable locks, call updatebatch. You'd use SB's option 1, except within each loop, check the counter against the maxlocks value and call updatebatch when you're about to get too many.
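The counter approach is just a flush-at-threshold loop. Since the thread's code would be VB6 ADO (.Update locally, .UpdateBatch to commit), here is only a language-neutral sketch of the control flow in Python, with update_record and flush_batch as hypothetical stand-ins for the ADO calls:

```python
def process_records(records, flush_batch, update_record,
                    max_locks=9500, margin=100):
    """Apply update_record to each record, flushing pending changes
    (e.g. calling UpdateBatch) before the pending count can reach the
    lock limit. Returns the number of flushes performed."""
    pending = 0
    flushes = 0
    for rec in records:
        update_record(rec)       # local edit, e.g. rs.Update on a batch cursor
        pending += 1
        if pending >= max_locks - margin:
            flush_batch()        # e.g. rs.UpdateBatch: commits and frees locks
            pending = 0
            flushes += 1
    if pending:
        flush_batch()            # commit whatever remains
        flushes += 1
    return flushes
```

With the default of 9,500 locks and a margin of 100, a 20,000-record run flushes three times: twice at the 9,400 threshold and once for the remainder.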
 

However, a single update doesn't necessarily cause one single lock.
 
Well, that's true, too, now that you mention it. It would be one lock per record involved, right?
 
> It would be one lock per record involved, right?

With record locking, generally yes (read further down) as far as a "per record" issue goes, but a "record" in a recordset may not always mean one row in a single table in the database (joins). So the update may lock two or more records in different tables, causing one lock per record involved as you stated, but maybe two or more locks for the update, because more than one row in different tables may be involved in the single recordset update.

And concerning a Jet .mdb, it depends on the type of default locking: page locking (the default) or record locking.
If page locking is used, then one page can hold more than one row (including partial rows), possibly causing fewer total locks than the number of rows being updated. Therefore, you could do a bulk deletion or update on 30,000 rows/records but need fewer than the default maximum of 9,500 locks.

On the reverse side, one record could span across more than one page, causing more than one lock for a single update. Therefore, you could do a bulk deletion or update on 30,000 rows/records but need more than 30,000 locks.

As far as how many rows (and partial rows) fit in a single page, or how many pages (or partial pages) a row spans, that depends on the size of the data and metadata: how many columns/fields there are, their sizes, and the data contained in them (such as Long Binary fields).
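To make the page arithmetic concrete, here is a rough, illustrative estimate in Python. It assumes Jet 4.0's 4,096-byte pages, densely packed rows that don't span pages, and ignores index and metadata locks, so treat the numbers as ballpark only:

```python
import math

def estimated_page_locks(row_count, row_bytes, page_bytes=4096):
    """Rough estimate of page locks needed to update row_count rows of
    row_bytes each, assuming rows pack densely and never span pages."""
    rows_per_page = max(1, page_bytes // row_bytes)
    return math.ceil(row_count / rows_per_page)

# 30,000 rows of ~100 bytes: about 40 rows per page, so roughly 750
# page locks, well under the default MaxLocksPerFile of 9,500.
# A 5,000-byte row flips it the other way: at least one lock per row.
```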

>4. Keep a counter of the updated records, and when you have a certain number, say a few records less than the maximum allowable locks, call updatebatch. You'd use SB's option 1, except within each loop, check the counter against the maxlocks value and call updatebatch when you're about to get too many

Yes you could do this (use the counter and if the max isn't reached, then call just .Update, and if reached, then call .UpdateBatch).
But in order to do this properly, you need to know the max locks setting, meaning first querying the registry value for MaxLocksPerFile; you also need to know approximately how many rows are affected by each update and, if page locking is enabled, how many pages each row uses.

Please be aware, though, that the locks Jet would need for an update are lock requests: Jet doesn't actually place any locks on a file itself; it requests the OS to do so. This means that even though you can raise the max locks per file (.mdb in this case) to a very high number, the number of locks that can actually be placed depends on available resources and the max allowed by the OS/server. On Novell servers, for instance, there is a max of 10,000 locks, slightly above the Jet default of 9,500. I think(!) that in later versions of Jet, when more locks are needed than the server allows (but not more than the number set in Jet), Jet will split the updates into two or more transactions. This assures the update isn't halted because of a lock limitation; however, it sure does assure the batch update integrity as may be required (the first batch succeeds and the second batch fails due to some other error).
 
>it sure does assure the batch update integrity

That was supposed to read:

"...it sure does Not assure the batch update integrity"
 
<but a "record" in a recordset may not always mean one row in a single table in the database (Joins)
Yes, indeed. Once you reminded me of that, I was thinking along the lines you describe (and describe very well, I might add). However, this all begs the question of whether there's any way of telling how many locks have been applied at a given time, so you don't have to take into account all of the factors that influence the number of locks. Isn't there perhaps a TotalLocks property or something like that somewhere, which would obviously greatly simplify this issue?
 
hmmmmmmmmm....so if I use adLockOptimistic and call .UpdateBatch, this will release the records?

That doesn't sound right. I'm not saying it ISN'T right, it just doesn't feel like what MS likes to do. My experience is that the release comes when the table/query/recordset is released. I've run into this in some timing issues when dealing with macros.

Your point about the joined records is a good one. Fortunately for me, I'm dealing with a single table. I'm going to try to find something in the MS data issues database.

Thanx guys



 
no, for that, use either:
1. adLockOptimistic with a server-side cursor and .Update for each update inside the loop,
or
2. adLockBatchOptimistic with a client-side cursor, and .UpdateBatch for each update inside the loop, instead of .Update inside the loop and .UpdateBatch outside the loop.
 

BobRodes, I am not sure, because I never looked into that. I know you can find out how many locks are in place, but I do not know if there is a method to determine how many locks will be placed, nor do I know if there is an external method call along the lines of "if the max locks are reached, then flush the transaction and start a new one" (internally it can happen under the one condition I mentioned with Novell, and also, since Jet 3.5, with DDL operations).

If you know your stuff well, you could probably attempt a good estimate, especially if record locking rather than page locking is in place. Then, under a Jet .mdb, there may be other factors to consider, such as additional locks for index pages and locks due to the use of the relational model, but this may have changed since the older days of Jet (I never bothered to look into it again).

 
<My experience is that the release comes when the table/query/recordset is released.

A disconnected recordset actually releases all server resources at the time that it's disconnected. You can make whatever changes you like locally, then reopen the connection and batch update the records. You can filter out records that haven't been altered, and also apply a filter for records that have been changed both by you and at the server since you created the recordset, so you can do conflict resolution. This article contains some useful information:
Bob
 