Nested queries over the network


DbHd

Programmer
Dec 20, 2004
I've been an Access developer since Access 1, but I've never encountered the situation I'm in now. I'm in a new job where they have large tables in an Access back end on the server and the program (front end) on each desktop. We usually have 4-6 concurrent users, up to 14 max.

We have many reports driven by nested queries, some 4 deep. They hit fairly large tables (up to 65K rows x 240 columns) that were basically brought in from PC File when Access 1 came out. Some bottom queries join 8-10 tables.
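To give an idea of the nesting (these names are made up; the real queries are messier), a report might sit on a stack of saved queries roughly like this:

qryLevel1 (bottom query, joins a few of the big tables):
SELECT c.CustomerID, c.Region, o.OrderDate, o.Amount
FROM tblCustomers AS c
INNER JOIN tblOrders AS o ON o.CustomerID = c.CustomerID;

qryLevel2 (built on qryLevel1):
SELECT q1.Region, q1.OrderDate, q1.Amount
FROM qryLevel1 AS q1
WHERE q1.OrderDate >= #1/1/2004#;

qryLevel3 (built on qryLevel2; the report is bound to this one):
SELECT q2.Region, Sum(q2.Amount) AS TotalAmount
FROM qryLevel2 AS q2
GROUP BY q2.Region;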

Here are my questions:
1. Does Access bring all of each table in the query to the front end?

2. How does Access deal with nested queries? Does row 1 go thru subq1, subq2, subq3, then row 2 go back thru the same process? Repeated for all 65K rows?

3. Would I gain a significant improvement by putting the result set from Q1 into a tmpTbl, then qry on that for Q2, etc?

4. Other than starting with a new design (out of the question for this non-profit), what do you recommend I do to keep the sucker going for another 7 years? My current thought is to take one report or form at a time and tune it up.
 
I can't answer questions #1-#4, because (as far as I know) no one has actually published how MS Access does all this internally. The closest we can get is to read something like "An Introduction to Database Systems" by C. J. Date, which gives you the answers to #1-#4 on a very meta, very academic level.

But I'll try to help, because this is a fun/interesting question.


As far as I know, the only thing you can really do to get Access working faster is to manage its use of indexes as well as possible. I've made some hack-ish performance tuning changes, like having every subquery reference text boxes on a form (FromDate and ToDate) so that each subquery only pulls a portion of the dataset--this makes the thing harder to maintain, but in my case it was necessary. The other (more obvious) indexing tweak is to go through your table structure and index whatever fields you're filtering the datasets on.
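As a rough sketch of both tweaks (frmReportDates, tblOrders and the field names are all made up for illustration), the bottom-level query gets parameterized on the form's text boxes, and the filtered field gets an index:

qryBottomLevel (only pulls the date range the user asked for):
PARAMETERS [Forms]![frmReportDates]![FromDate] DateTime, [Forms]![frmReportDates]![ToDate] DateTime;
SELECT o.OrderID, o.CustomerID, o.OrderDate, o.Amount
FROM tblOrders AS o
WHERE o.OrderDate BETWEEN [Forms]![frmReportDates]![FromDate] AND [Forms]![frmReportDates]![ToDate];

Index on the filtered field (run once as a DDL query, or add it in table design):
CREATE INDEX idxOrderDate ON tblOrders (OrderDate);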



There's nothing scarier than editing the table structure of a (working) complex database with queries and forms and reports, so I know where you're coming from.


I'll leave you with the advice I give everyone: "If it works properly, then it's working GREAT. If it's working properly in Access, then it's working GREAT--but you'd better check that those backups run, because you'll probably be needing them soon."
 
dbhd,
Does Access bring all of each table in the query to the front end?
If the entire WHERE clause and the JOINs are indexed, then it will only bring over the indexes it needs to perform the SELECT, then go back using the internal pointers for the rest of the records. However, you need the ENTIRE join/where indexed for this, which often isn't the case. Use JetShowPlan to test it:

Set the following registry key (this is for Access 97; for Access 2000 and up, I think it's under Jet\4.0 instead of Jet\3.5):
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\3.5\Engines\Debug]
"JETSHOWPLAN"="ON"
Create the Debug key if it doesn't exist. Run your queries, then look for the file ShowPlan.out. Look at the latest entries (there's a lot of junk in there from fetching the MSys tables). Adjust indexes based on what you see there. DON'T forget to turn this OFF when you're done--it'll hog disk I/O for every query.

Anyway, bringing just the indexes over is of course much faster than bringing the whole table, but still not as efficient as a true DBMS server. Otherwise it schleps all the tables in question across the network--a real pain.
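To make that concrete (table and field names are hypothetical): if a report query joins and filters like the first statement below, then every field in the JOIN and the WHERE wants an index on the back-end tables, like the DDL that follows:

SELECT c.CustName, o.OrderDate, o.Amount
FROM tblCustomers AS c
INNER JOIN tblOrders AS o ON o.CustomerID = c.CustomerID
WHERE c.Region = 'NE' AND o.OrderDate >= #1/1/2004#;

CREATE INDEX idxOrdCustomer ON tblOrders (CustomerID);
CREATE INDEX idxOrdDate ON tblOrders (OrderDate);
CREATE INDEX idxCustRegion ON tblCustomers (Region);

(tblCustomers.CustomerID is presumably already indexed as the primary key.)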
How does Access deal with nested queries? Does row 1 go thru subq1, subq2, subq3, then row 2 go back thru the same process? Repeated for all 65K rows?
I don't know the low-level details on this, and even JetShowPlan doesn't deal with nested queries, but I would assume it caches the results of the innermost query locally and uses that set for the next outer nest, and so on.

Would I gain a significant improvement by putting the result set from Q1 into a tmpTbl, then qry on that for Q2, etc?
This is not a bad idea; it's also much easier to debug and maintain this way. It's not unlike what Jet does in the previous question--caching the results of the inner nest--all those Jetxxx.tmp files in the Temp directory are little .mdbs that hold these temporary tables. (However, you can't open those in Access; the structure is different from a true .mdb--I've already tried.)
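A minimal sketch of the temp-table approach (qryBottom, tmpStep1 and the fields are made-up names): first materialize the bottom query into a local scratch table with a make-table query, then point the next level at the scratch table instead of re-running the whole nested stack:

SELECT qb.* INTO tmpStep1
FROM qryBottom AS qb;

SELECT t.CustomerID, Sum(t.Amount) AS TotalAmount
FROM tmpStep1 AS t
GROUP BY t.CustomerID;

If the front-end .mdb creates tmpStep1 locally, the later steps never touch the network at all.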


Other than starting with a new design (out of the question for this non-profit), what do you recommend I do to keep the sucker going for another 7 years?
7 years??? That's a long time. I'd seriously consider a move to SQL Server, or at least MSDE.

Other options include analyzing the data. Static tables (i.e., data that changes maybe monthly or even less frequently) can be made local, with a short routine that refreshes the local versions when the front-end .mdb opens. Heck, even weekly-updated tables are candidates for this. That way you take the one-time hit of schlepping the data across the network at startup, and the rest of the queries that use those tables are far less painful.
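A rough sketch of that refresh (tblRegions is a hypothetical linked back-end table, tblRegionsLocal its local copy); each statement would be its own saved query, or run from VBA with CurrentDb.Execute when the front end opens:

DELETE FROM tblRegionsLocal;

INSERT INTO tblRegionsLocal
SELECT *
FROM tblRegions;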

Also, consider WHERE clauses that can use these local tables--if you have a JOIN on one of them and a WHERE on one of the joined fields, put the WHERE explicitly on the local table rather than the remote one. Other similar little tricks will help.
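For example (hypothetical names again), with a local tblRegionsLocal joined to a back-end tblOrders, filter on the local table's field so the local side gets narrowed before the remote table is hit:

SELECT o.OrderID, o.Amount, r.Region
FROM tblRegionsLocal AS r
INNER JOIN tblOrders AS o ON o.RegionID = r.RegionID
WHERE r.Region = 'NE';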
--Jim

 