Persistent recordsets, speed, and the ldb file

downwitchyobadself

Programmer
Aug 1, 2001
159
US
Hi all.

I've had some ongoing problems with a file-server app's speed across a network. I recently put a new version into beta testing that follows the very good advice on opening a persistent recordset when the app opens and closing it when the app closes, so as to maintain a single connection to the ldb file.
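
For what it's worth, here's roughly what that persistent-connection code looks like (a simplified sketch only; the module, path, and table names below are placeholders, not my actual ones):

' Standard module: hold one DAO connection open for the life of the app,
' so the ldb is opened once per session instead of on every operation.
' gdbBackEnd, grsLock, the UNC path, and the table name are placeholders.
Option Explicit

Private gdbBackEnd As DAO.Database
Private grsLock As DAO.Recordset

Public Sub OpenPersistentConnection()
    ' Called once from the startup form / Autoexec.
    Set gdbBackEnd = DBEngine.Workspaces(0).OpenDatabase( _
        "\\Server\Share\BackEnd.mdb", False, False)
    ' Holding any small recordset open keeps the connection alive.
    Set grsLock = gdbBackEnd.OpenRecordset("tblSomeSmallTable", dbOpenSnapshot)
End Sub

Public Sub ClosePersistentConnection()
    ' Called once on app shutdown.
    On Error Resume Next
    grsLock.Close
    Set grsLock = Nothing
    gdbBackEnd.Close
    Set gdbBackEnd = Nothing
End Sub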

Unfortunately, the persistent recordset and single connection are not working; the locks seem to be accumulating faster than in the previous version, in fact. Watching the ldb file on the server, the number of locks jumps quickly, and as soon as one user hits roughly 50 locks, everyone slows down. In general, the only way to make the lock count drop is to close Access.

It's extremely frustrating--the app is code-heavy (Access 2000, uses a lot of DAO recordsets), and though it flies when run locally, the constant reconnecting and rewriting attempts on the ldb are ruining its speed over the network; the longer the app runs on a given day, the slower it runs.

Anyone have any experience with this? Any ideas why my basic persistent recordset and single db connection would not be working?

Thanks in advance.
 
hmmmmmmmmmmmmm,

I'm at the lower end of understanding of networking and its impact on processes, so my info should be taken with the whole block of salt.

Ms. A. is not a client-server app, so ALL processing takes place on each local machine, regardless of where the various pieces of the app are placed. While there is some advantage to splitting the db into two (or more) pieces (FE / BE / ...), the major impact is on app load, not on performance, as ALL the data necessary for a transaction is pulled across the net for each operation. From experimental results, it appears that Ms. A. DOES have a reasonable cache process, so at least some info is re-read from cache for some transactions, but those transactions are still subject to refreshing when the underlying information has changed.
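
If you do split it, re-pointing the front-end's links at the back-end is just a loop over the TableDefs - a rough sketch only (the UNC path below is an example, not anything from your setup):

' Rough sketch: relink a split front-end's attached tables to a back-end .mdb.
' The path passed in is an example only.
Public Sub RelinkTables(ByVal strBackEndPath As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Only linked Jet tables carry a ";DATABASE=" connect string.
        If InStr(tdf.Connect, "DATABASE=") > 0 Then
            tdf.Connect = ";DATABASE=" & strBackEndPath
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' e.g.  RelinkTables "\\Server\Share\BackEnd.mdb"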

Most networks are set up to handle the more normal (or common) activities, which are file sharing for SMALL information packages, such as e-mail sending/retrieval, saving memos and spreadsheets to a file server, ...

Setting up a network to handle the relatively large transactions of database operations is somewhere between uncommon, rare, and (in my limited experience) non-existent. Per the intro, this is NOT my area of 'expertise', so I cannot advise on the who, what, when, why, or where these adjustments need to be made; however, I have been assured that there is a difference - and the impact is HUGE.

One common 'solution' to the network crawl is to switch from the standard .MDB back-end to the MSDE (a.k.a. mini SQL Server), where you can move many/most of the queries server-side. This improves overall performance, as MSDE returns only the results, not the entire underlying recordsets needed to construct them. It is, however, not without cost, in that you really NEED to become proficient in SQL syntax: the convenient query grid is not there, and MSDE does not include the handy tool set that comes with the full version of SQL Server.
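
As a f'rinstance only (the connect string, server, database, and table names here are invented), a pass-through query hands the SQL to the server and gets back just the result set:

' Illustration only: a DAO pass-through query against an MSDE / SQL Server
' back-end.  Connect string, server, database, and table names are made up.
Public Function GetOpenOrders() As DAO.Recordset
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("")      ' temporary, unsaved QueryDef
    qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                  "DATABASE=MyDb;Trusted_Connection=Yes;"
    qdf.SQL = "SELECT OrderID, CustomerID FROM Orders WHERE Shipped = 0"
    qdf.ReturnsRecords = True
    Set GetOpenOrders = qdf.OpenRecordset(dbOpenSnapshot)
End Function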

Of course, all of the above assumes that Ms. A. is not the 'real' culprit, but I have used Ms. A. with more than your 50 simultaneous users with little or no throughput issue, so I do not think it is Ms. A. and/or the locking mechanics, but rather the overall network arrangement and/or other issues in your app design.



MichaelRed
m.red@att.net

Searching for employment in all the wrong places
 
Well, for what it's worth, I've never had any problems before I hit this particular network. And, equally for what it's worth, this app does about 1/10 the volume of transactions I've ever seen in any MS Access app, anywhere--the entire thing is handled locally, through OOP/custom class modules, basically imitating a VB or C app, and only at key moments are small amounts of data sent or retrieved.

The single exception to this rule is queries under combos, and in most cases, there's just not that much I can do about it. But as the requeries are limited, this should really not slow things down; the load is mostly on the client machine's memory, as it has to maintain lots and lots of recordsets and variables at one time.
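
One common way to keep those requeries down (a sketch only; the control and table names are invented, not from my app) is to leave the RowSource blank and assign it only when the combo is entered, so its query doesn't cross the network until the user actually needs the list:

' Sketch only: defer a combo's row source until first use, so the query
' doesn't hit the back end at form load.  Names here are placeholders.
Private Sub cboCustomer_Enter()
    If Len(Me.cboCustomer.RowSource) = 0 Then
        Me.cboCustomer.RowSource = "SELECT CustomerID, CustomerName " & _
            "FROM tblCustomers ORDER BY CustomerName;"
    End If
End Sub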

I'm looking more for specific details about the writing and rewriting of these lock files, mainly because I can see degradation in the app's performance between first load and shutdown, and especially when all users log off and log back in, thus erasing all connections to the back end and starting this lock-file writing process over again. While I can't ever be certain that the problem is there, I feel fairly sure--after 15 months of careful redesign--that the problem is not in the app's programming. The tips on Tony's website are the best and most distinctive I've seen anywhere regarding A2K performance hell, so I tend to trust him; I just don't know why what he proposes wouldn't work in my case.
 