
Slow SQL queries


andshr (Programmer)
Mar 16, 2006
Hi:

During working hours, many of our SQL 2000 stored procedures run 3-5 times longer than on an idle server. Sometimes you can see blocking, but more often not. CPU is at 60-80%, and the buffer cache hit ratio is 99.98%. Would having 8 virtual CPUs instead of 4 real ones (provided we upgrade to Windows 2000 Advanced Server) help? Would adding more RAM (SQL uses 2 GB now, but after the Windows upgrade it could have 4 GB) improve performance? Where should we look for a bottleneck?
 
During working hours... probably concurrent locking.

Isn't 2 GB a little small for a box with 4 real CPUs?
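If you want to confirm whether blocking is what's slowing you down, you can run the standard system procedures while the slowdown is happening (these ship with SQL Server 2000, nothing custom needed):

Code:
-- lists active connections; a non-blank BlkBy column means
-- that SPID is being blocked by the SPID shown in BlkBy
exec sp_who2

-- shows the locks each SPID currently holds or is waiting on
exec sp_lock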

 
This is not a simple question. There are whole books written on performance tuning.

Some things to look at: our queries run much faster on production during work hours if we use the (nolock) hint on all selects against very large tables.
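For example (the Orders table and its columns here are made up, just to show the syntax), a dirty read against a big table looks like this:

Code:
-- with (nolock) reads without taking shared locks, so the select
-- is not blocked by writers, but it can return uncommitted data
select OrderID, OrderDate, Amount
from Orders with (nolock)
where OrderDate >= '20060101'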

Some common culprits:
- Dynamic SQL can slow down performance.
- Cursors are the kiss of death for performance.
- Using NOT IN instead of a LEFT JOIN can slow performance.
- Asking for more columns than you need will slow performance.
- Missing or poorly chosen indexes can cause issues.
- Using temp tables instead of table variables in a stored proc can slow things down, especially if tempdb needs to grow.
- Inline scalar functions can hurt performance.
- Searches that use LIKE '%sometext%' keep the system from using indexes and slow things down. We fixed a lot of these by having an IF statement check for an exact match first and only run the wildcard search if one was not found; that sped up the whole system tremendously (see the sketch below).
- Bad data types that force conversions in a select just to do date math can cause issues.
- Restructuring to denormalize some things and avoid joins can help, especially if the joined tables are large.
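Here is a rough sketch of that exact-match-first pattern (the Customers table, its columns, and @SearchText are made-up names for illustration):

Code:
-- @SearchText would be a parameter of the stored procedure.
-- Try the indexed exact match first; only fall back to the
-- unindexable wildcard scan when nothing was found.
if exists (select * from Customers where CompanyName = @SearchText)
   select CustomerID, CompanyName
   from Customers
   where CompanyName = @SearchText                   -- can use the index
else
   select CustomerID, CompanyName
   from Customers
   where CompanyName like '%' + @SearchText + '%'    -- scans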

Another thing that can help with large tables is to pull out a subset of the information into a table variable or derived table and then join to the other tables.
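Something along these lines (again, made-up table names, just to illustrate the shape of it):

Code:
-- narrow the large table down into a table variable first...
declare @Recent table (OrderID int primary key, CustomerID int)

insert into @Recent (OrderID, CustomerID)
select OrderID, CustomerID
from Orders
where OrderDate >= '20060101'

-- ...then join the small subset to the other tables
select r.OrderID, c.CompanyName
from @Recent r
join Customers c on c.CustomerID = r.CustomerID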

There are just so many things you can do to fix performance issues that I could go on all day. And some things fix specific circumstances but not others, so without knowing more, it's hard to say what exactly is causing your problem.

Yes, throwing more hardware at the issue will help in the short term, but it probably will not solve the underlying root cause of the problem, which usually is that your stored procedures are poorly written from a performance perspective or your database design is flawed.

I could go on and on.

 
Have you tried viewing the execution plan of some of the queries that run slower to see what can be done to tune them? What I've found is that proper indexing will resolve 75% of performance issues. Good programming per SQL Sister's comments resolves some more, and splitting indexes, data, log file and tempdb onto separate drive arrays even more. I always start with, and recommend, indexing as the first cure before any hardware considerations. I support a 50 GB database on a 4-processor box with only 6 GB of memory. I added about 450 indexes, and the system, which once died (literally) every day before the indexes, now screams. Another customer with the exact same 3rd-party software and its pathetic out-of-the-box indexes has a machine with 8 processors and about 16 GB of memory, yet gets pathetic performance, and their database is only about 20 GB in size. Their disk subsystem is much better than our configuration, too: we have 2 drive arrays, one with 15K drives and the other with 10K, whereas theirs is 4 drive arrays, all with 15K drives.
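If you haven't looked at plans before, Query Analyzer will show you the graphical plan (Ctrl+K, then run the query), or you can get a text plan and the I/O numbers from a script; the select below is just a placeholder query against a made-up table:

Code:
-- print the estimated plan as text instead of executing
set showplan_text on
go
select CustomerID, CompanyName from Customers where CustomerID = 42
go
set showplan_text off
go

-- or execute the query and report logical/physical reads per table
set statistics io on
go
select CustomerID, CompanyName from Customers where CustomerID = 42
go
set statistics io off
go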
 

Thank you all for your answers. Our 12 GB database is heavily and intelligently indexed. But although our main tables have no more than 1 million records, our stored procedures have 5-8 inner joins on average. We also often create temp tables within stored procedures; are we suffocating tempdb? What values of the Performance Monitor counters indicate a problem with I/O? Would increasing RAM from 2 to 4 GB help? Also, our CPU usage went from 40-50% six months ago to 80-90%, and we are suffering... Should we just get more CPUs if no bottleneck is found?
 
Consider using table variables instead of temp tables; they are faster and put less of a hit on tempdb.
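The difference inside a stored proc is just the declaration (the column list here is made up):

Code:
-- temp table: a real object created in tempdb, can cause recompiles
create table #Totals (CustomerID int, Total money)

-- table variable: scoped to the batch, lighter-weight, no statistics,
-- and best kept to smallish row counts
declare @Totals table (CustomerID int, Total money)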

Another thing that can help when joining these large tables is to put whatever filtering criteria you have as part of the join instead of the where clause. This means fewer records are joined together.

Code:
select field1, field2
from table1
join table2 on table1.id = table2.id and table2.client_id = 130
join table3 on table1.id = table3.id and table3.type_id = 114
should work faster than
Code:
select field1, field2
from table1
join table2 on table1.id = table2.id
join table3 on table1.id = table3.id
where table2.client_id = 130 and table3.type_id = 114

Of course, each system is different; test yours to make sure a suggested performance improvement does in fact improve performance.

As to your question about performance monitoring, this FAQ has some books I found helpful. One is a download from the net and could help you a lot in tracking down your problem.

 