
Optimizing Query Performance


xbigblue (Programmer), Jun 20, 2001
Hello friends,

I have read somewhere, though I cannot remember where, that there is a technique for optimizing query performance: Access (Access 2000 in my case) does not necessarily re-optimize a query as written, especially after it has been modified, unless you run some particular system function. I seem to recall it was Tools/Compact and Repair, but I am not sure.
(I searched on OPTIMIZE but found no helpful posts.)
My problem is a series of queries chained together to produce a report. It used to run slowly but acceptably. I made changes to the first and second queries in the series, tested them to be sure they did what I wanted, and their individual performance was excellent. Those were the only changes. But when I ran the whole series, the run time was disastrous: what took two minutes now takes about ten. Compact/Repair does not seem to help in this case. Do any of you have recommendations?

Thanks,
xbigblue
 
Compact and Repair is a tool for compacting and repairing your data; it does not optimize your queries in any way. Access does have an optimization tool (the Performance Analyzer) that suggests how to change the way your tables are set up to improve performance, but it does not make the changes itself.
When stringing queries together to create a single output, the main way to speed up performance is to look at how you have linked them. If they work quickly individually, then your problem is in the final query. I can't really give you a set of guidelines for this because there are too many tricks involved. Post the queries on Tek-Tips and I, or someone much wiser, will more than likely be able to spot what you can do to speed things up.

A good place to start would be to look at the types of joins you are using.
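To make that concrete, here is a minimal sketch with made-up table and field names. An inner join returns only the matching rows and gives the optimizer the most freedom; a left (outer) join has to keep every row from the left-hand table and generally runs slower, so use it only when you actually need the unmatched rows:

  SELECT o.OrderID, c.CompanyName
  FROM Orders AS o INNER JOIN Customers AS c
  ON o.CustomerID = c.CustomerID;

  SELECT o.OrderID, c.CompanyName
  FROM Orders AS o LEFT JOIN Customers AS c
  ON o.CustomerID = c.CustomerID;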

Good luck.
 
Did you say you ran EACH of them separately and they were quick? I think Camelman is correct in suggesting you look at the types of joins; that often slows me up. Also (and you may know this already), if you have one query that is based on, say, three others, then in the macro or VBA you only need to run that one query; the others will of course run because it depends on them. I don't have any other good ideas. :) Dawn
 
I found some other tips in "Mastering Access 2000":
1. Use and display only the fields you need.
2. Sorting on non-indexed fields can slow things down.
3. Avoid using restrictive query criteria (And, Or, Not) on calculated and non-indexed columns.
4. Speed up queries by using Between...And, In, and = on indexed columns.
5. It is better to create sub-queries than to use domain aggregate functions to pull data from a table that is not in the query (tips 4 and 5 are sketched below).
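To illustrate with made-up names (Orders, Customers, OrderDate): a domain aggregate such as DLookUp is re-evaluated row by row, while a join or sub-query lets the engine use its indexes. Something like

  SELECT o.OrderID,
         DLookUp("Region", "Customers", "CustomerID=" & [o].[CustomerID]) AS Region
  FROM Orders AS o;

is usually far slower than the equivalent join:

  SELECT o.OrderID, c.Region
  FROM Orders AS o INNER JOIN Customers AS c
  ON o.CustomerID = c.CustomerID;

And per tip 4, a criterion like

  WHERE OrderDate Between #1/1/2001# And #6/30/2001#

on an indexed OrderDate column gives the optimizer a range it can satisfy from the index.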

Also, if you use the Office Assistant and look up "optimizing queries", you will find a link to "Ways to optimize query performance".

Hope this helps

Dawn
 
One last comment to add to Dawn's excellent advice:
Check that you are only sorting in the last query.
Any sorts in the queries that make up your final query will only slow things down.
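To sketch the idea with hypothetical query names, leave ORDER BY out of every intermediate step and sort once at the end. The intermediate query (qryTotals) has no sort:

  SELECT CustomerID, Sum(Amount) AS Total
  FROM Orders
  GROUP BY CustomerID;

The final query does the only sort:

  SELECT CustomerID, Total
  FROM qryTotals
  ORDER BY Total DESC;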
 
The Access Jet engine has a sophisticated cost-based optimiser, but like all database engines it can only deal with one query at a time, so it cannot optimise across a chain of queries.

Jet generates plans (i.e. programs) to carry out your queries. It does not, however, have a feature to display how it is going to do this (compare e.g. DB2, which has EXPLAIN).

The optimisation wizard knows the optimiser's algorithms and will suggest changes to play to its strengths.

What I'm not sure about - maybe other people can clarify - is whether Jet freezes the plan when you compile. If so, severe problems can build up as the data changes. mike.stephens@bnpparibas.com
 
All,
Jet will only recompile a query if a change is made to its SQL. And I vaguely remember an undocumented tool, similar to the Oracle Explain Plan or DB2 EXPLAIN, that exists somewhere deep in the bowels of Jet.
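If it's the tool I'm thinking of, it may be Jet's ShowPlan output. From memory (it is undocumented and unsupported, so treat the details as an assumption to verify): you add a Debug key under the Jet engine's registry branch, and Jet 4.0 then appends the plan for each query it compiles to a SHOWPLAN.OUT text file in the current directory:

  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug
      JETSHOWPLAN = "ON"    (String value; set it to "OFF" to stop the logging)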

Otherwise, with Jet the above advice on indexing, joins, etc. is worth taking. I'll point out a few overlooked indexing tips:
If a field is part of a multi-field primary key, that field is *not* indexed on its own, so joining on that field alone to its counterpart in another table won't bring the performance that a separate index on that field would bring. Don't be fooled just because the field is shown in bold in the query grid.

A join on multiple fields can benefit from a multi-field index. To build one in Access, go into the Indexes box in table design view, type an index name in the first column and the first field name in the second column, then add each remaining field on its own row, leaving the index-name column *blank* on those rows.
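The same multi-field index can also be created with a DDL query run from SQL view; a quick sketch with made-up table and field names:

  CREATE INDEX idxCustDate ON Orders (CustomerID, OrderDate);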

And remember, the optimizer will typically use *one* index per table in its plan. If your criteria are order_date and order_type, for instance, an index on each of them separately will do nothing more than an index on either one alone: once the first fetch is made (hopefully using the index that returns the smallest set of rowids), the result set from that table is scanned in full to do the remaining selections. Why read another index, get another result set, and intersect the two when a scan of the initial (hopefully smallest) result set would cost less? So resist the urge to over-index to try to catch every field you join, sort, or select on.

--Jim


 
Jim

With Rushmore the optimiser looks for multiple indexes.


Mike mike.stephens@bnpparibas.com
 
Thanks to all who responded and gave excellent advice. The notes on table indexes gave me some new insights. I had already applied most of the other suggestions with no success. Having no better ideas, I fell back to a backup copy of the database and retried it: performance was back to what I expected. I will reapply the changes more carefully; obviously I broke something, and there is just too much material to look through to find what I accidentally broke. More confirmation of the need to back up frequently.

Thanks again,
xbigblue
 
Hi xbigblue and friends,

I've never done this in Access (because it's a bit clumsy), but I have used #temp tables when writing SQL statements against massive databases, to cut down on the problems caused by joins.

Queries involving multiple tables and multiple operations may be solved in more than one way, and the joins get progressively more demanding as the number of tables (or queries) increases; the demands grow exponentially.

By using action queries you should be able to manipulate the data without any serious joins in sight. In your case, create two tables from the two fast queries and then replace your last query with a query based on those two temp tables.
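A rough sketch with made-up names, in case it helps. SELECT ... INTO is Jet's make-table action query; note that when run from code the target table must not already exist, so drop (or delete by hand) the temp tables before each re-run:

  SELECT CustomerID, Sum(Amount) AS Total
  INTO tmpTotals
  FROM Orders
  GROUP BY CustomerID;

Then the last query reads the temp table instead of a chained query:

  SELECT t.CustomerID, t.Total, c.Region
  FROM tmpTotals AS t INNER JOIN Customers AS c
  ON t.CustomerID = c.CustomerID;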

Who knows - it might work.


Stew

 
Mymou,

Thanks for the refresher on the technique of creating "working" tables rather than joining queries. I already use it elsewhere in the application but had not thought of trying it here.

I will give that a try when other fires burning brighter have been subdued.

xbigblue
 
A quick update for all who have contributed to solving my slow-running report problem:

Performance that was as bad as 6 to 8 minutes to produce the complex report is now down to 30 seconds, achieved by using an intermediate table to save query results, cleaning up unnecessary grouping, dropping unnecessary sorting, and improving the joins where queries are still chained together. I also fixed a poor implementation of a subreport within this report that was further crippling performance; the subreport was also able to use an intermediate table instead of chained queries.

It had been a case of "if it ain't broke, don't fix it", but my resistance to reworking that long series of queries was worn down by the "hourglass of death" while waiting for the report to appear. It really was broke.

So, to all who contributed, thanks!!
xbigblue
 
BnpMike,
Regarding Rushmore: what I was referring to in saying one index is used applies conceptually to *all* optimizers. Suppose you have a customer table and the query criteria are, say, Lastname starting with 'H' and CustomerType = 'Active'. I've often seen users, even 'power users', index Lastname and CustomerType separately, thinking this will help.

But once the engine hits the Lastname index (which the optimizer will hopefully choose over the CustomerType index), it has a set of rowids, say 10% of the records. It wouldn't make sense to then hit the type index and return another set of, say, 40% of the records: intersecting those two sets (including the time taken to fetch the second set from the index) is much more expensive than simply scanning the table using the rowids from the first set and selecting or rejecting rows based on the CustomerType field.

The incorrect assumption is that the result set carries an index on CustomerType just because the table did. This is what I mean by over-indexing. A composite index on Lastname/CustomerType is a possibility, but maybe not a practical one, since there may still be little or no gain (especially with the Like clause).

In fact, Oracle goes further and states that even in the best scenario, a full table scan is faster than an index scan plus a table fetch on those rowids whenever the index returns more than 20% of the table. In my example above, a full table scan would be faster than using just the CustomerType index, if that were the only index.
--Jim

 