
SQL Not Utilizing CPUs

Status
Not open for further replies.

compuveg

IS-IT--Management
Dec 3, 2001

I've got a 2-CPU license of SQL 2000 running on Windows 2003. I've noticed that even when I run massive queries, like requesting a 300MB transaction log stored in a SQL DB, the CPU utilization on my processors barely goes over 30%. (Just FYI, I'm not confusing the SQL transaction log with the transaction log I'm talking about querying.)

I see the SQL server and its 1.5+ GB of allocated RAM listed in the process list. Its utilization does go up, just never very high, and queries are taking too long.

What have I done wrong? (I've tried giving priority to background tasks and applications with no difference.) Any tips?

Thanks--
 
If you're pulling that much data off at once, your bottleneck might be your disks or RAID/SCSI card(s). What's your disk queue at, and your read + write bytes per second, while doing this operation?
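As a rough sanity check on that disk queue counter (not from this thread: a common rule of thumb is that a sustained average queue length above about 2 per spindle points at a disk bottleneck), here's a sketch of the arithmetic. The spindle count and sampled queue value are hypothetical example inputs:

```python
# Rough disk-queue sanity check. Rule of thumb (assumption, not a hard
# limit): a sustained PhysicalDisk "Avg. Disk Queue Length" above ~2 per
# spindle suggests the disk subsystem is the bottleneck.

def queue_per_spindle(avg_disk_queue_length: float, spindles: int) -> float:
    """Outstanding I/Os per physical drive in the array."""
    return avg_disk_queue_length / spindles

def looks_disk_bound(avg_disk_queue_length: float, spindles: int,
                     threshold: float = 2.0) -> bool:
    """True if the per-spindle queue exceeds the rule-of-thumb threshold."""
    return queue_per_spindle(avg_disk_queue_length, spindles) > threshold

# Hypothetical example: a 6-drive RAID set sampled at an average queue of 18
print(queue_per_spindle(18, 6))   # 3.0 outstanding I/Os per spindle
print(looks_disk_bound(18, 6))    # True -> look hard at the disks
```

The threshold is only a starting point; caching controllers can sustain deeper queues without trouble.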

What's the cost for the query? Check each step in the execution plan and see where the cost is. Is it IO or CPU?

Denny

--Anything is possible. All it takes is a little research. (Me)
 

I just ran a test, copying from one RAID 5 logical volume to another. It was a total of two 2.4GB files, for nearly 5GB of data. It transferred in 4 minutes 24 seconds, for a throughput of over 1GB/minute. I must admit that's a little less performance than I'd expected from Ultra320 10k rpm drives, even with a wimpy HP 641 RAID controller.

Given this is a 17GB DB, does my array performance sound like the problem?

My test file copy operation is both reading and writing, with writing traditionally being the more time-consuming of the two. So theoretically my read throughput would be >2GB/minute.
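A quick back-of-the-envelope check of those numbers (the doubling assumes the copy spends roughly equal time reading and writing, which is only approximately true):

```python
# Sanity check of the copy test above: two 2.4 GB files in 4 min 24 sec.
total_gb = 2 * 2.4            # ~4.8 GB copied
elapsed_min = 4 + 24 / 60     # 4.4 minutes

copy_throughput = total_gb / elapsed_min   # read + write combined
print(round(copy_throughput, 2))           # ~1.09 GB/min

# If reads and writes each took about half the elapsed time,
# pure reads would run at roughly twice the copy rate.
est_read_throughput = copy_throughput * 2
print(round(est_read_throughput, 2))       # ~2.18 GB/min
```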

Would I get better performance if I just went RAID 0+1 on the logical volume where the DB files are stored? With it being backed up nightly, I don't see where RAID 5 is much of a benefit. I wonder if I should even use RAID on that logical volume. (Our contractor suggested RAID 5, though, so that's what I went with.)

I do have the rebuild priority set to low, and the stripe size is 16k. I don't see a lot else to be tweaked. Looking back at the options, I'm thinking I should move my stripe size up to 64k. (And I wonder why I didn't do that from the get-go.)
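One general reason 64k is often recommended for SQL Server data volumes (a standard SQL Server fact, not HP-specific advice): SQL Server reads data in 64KB extents (eight 8KB pages), so a 16k stripe splits every aligned extent read across four stripe units. A quick illustration:

```python
# How many stripe units one SQL Server extent read touches, best case
# (extent aligned to a stripe boundary). Extent size is standard SQL
# Server; the 16k/64k values are the current and proposed stripe sizes.

EXTENT_KB = 64   # one extent = eight 8 KB pages

def stripe_units_per_extent(stripe_kb: int) -> int:
    """Stripe units a single aligned 64 KB extent read spans."""
    return max(1, EXTENT_KB // stripe_kb)

print(stripe_units_per_extent(16))  # 4 units per extent read
print(stripe_units_per_extent(64))  # 1 unit per extent read
```

Fewer stripe units per extent means fewer drives tied up by each logical read.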

Any input? Thanks--

 
I'd still like to look at the disk queue, and the execution plan.

I'd stick with RAID 5 for the data files and, if possible, RAID 0+1 for the transaction logs.



Denny

--Anything is possible. All it takes is a little research. (Me)
 
There are a number of variables there besides simply hard disk configuration. Indexes play a big part in database performance, as well as the arrangement of data on the disk subsystem. An index can have an effect on where data is stored. Small random transactions will naturally be latency bound, rather than bandwidth bound. Large ones (like you mentioned above) are going to be limited by the disk subsystem first of all, then by the memory subsystem.
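To put illustrative numbers on the latency-bound point (typical figures for a 10k rpm drive, not measurements from this system): random single-page reads are dominated by seek and rotational delay, so per-disk throughput collapses compared with sequential transfer.

```python
# Why small random I/O is latency bound. The seek and rotational figures
# are typical assumptions for a 10k rpm drive, not measured values.
seek_ms = 5.0
rotational_ms = 3.0          # half a revolution: 60000 ms / 10000 rpm / 2
page_kb = 8                  # one SQL Server page

ms_per_io = seek_ms + rotational_ms
ios_per_sec = 1000 / ms_per_io
random_kb_per_sec = ios_per_sec * page_kb

print(round(ios_per_sec))        # ~125 random page reads/sec per disk
print(round(random_kb_per_sec))  # ~1000 KB/s, vs tens of MB/s sequential
```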

A 17GB database is nothing to sneeze at, especially when this is being handled by one machine. The design of the database can play a large part in that. A 17GB database will run poorly on any system if not properly designed and tuned.

Look at the indexes first. If those appear to be reasonable (only on primary/foreign keys and frequently searched fields) then consider boosting your RAM. If that is a dual processor system it can probably handle 4GB or more. 1.5GB of RAM is kinda skimpy for a SQL Server machine, IMO.

Also make sure you are running optimizations and index reorganizations as part of your maintenance plan. Those can go a long way towards keeping related information near each other on the disk. It's like defrag... but different :) .
 
Wow, I've been given a lot to chew on.

I'm going to be looking at the disk IO, and at the DB's indexes and optimization. Probably be buying more RAM at some point, too.

Given my lack of experience in this specific area, I expect it to take a week or two for me to fully investigate all the options I've been given. I will definitely be posting my results back here though.

Thanks all-
 
On a lighter note, though... more RAM certainly won't hurt, and it's cheap in proportion to the rest of the system. Unfortunately, working out the ins and outs of performance issues in databases takes a lot more than "get the fastest hardware available".
 

Well, I emailed HP and asked what tips they could give for optimizing this controller's disk access times. They didn't really have much to suggest beyond using a larger stripe size.

I did get a chance to run some traces, and it definitely looks to me like our disk queues are the bottleneck. Ugh. Given the app using the DB is a commercial app capable of handling huge numbers of documents, I don't expect I can do much to optimize the indexes, etc.

Thanks all for the tips.
 