
Improving server performance for update scripts

Status
Not open for further replies.

RRinTetons

IS-IT--Management
Jul 4, 2001
333
US
SQL Server 2005 Enterprise

I have to run a couple of vendor-supplied update scripts as part of an application upgrade. One does a string REPLACE in XML columns across some 50,000,000 rows. The other creates a temp table for each table in the db (4,000+), writes the original data into the temp, drops the original, recreates it with some new data types, and then writes the original data back. Right now it's taking 16 hours to complete the two of them. That means I'll have to take the resort's primary enterprise application offline for an entire business day to get it done. I'd like to avoid that! The vendor is trying to split the scripts up, and I'm exploring other options.

Can anyone suggest any resource tweaks that might help speed up such updates? The box has 4 GB of RAM, a 10,000 RPM local drive, and several LUNs mounted from a NetApp Filer. I already have it switched to Simple Recovery for the update.

I realize this is a very vague problem description; I'm just fishing for ideas. I don't expect anybody to come up with a way to make it complete in 15 minutes (although that would be cool!).



-
Richard Ray
Jackson Hole Mountain Resort
 
The update should be able to be batched if you have a timestamp column you can update for each batch. I used to see this issue at a number of clients: an update would run for a day, but batched out it took only a couple of hours. As for the changes to the tables, it depends on what is happening. If they are adding columns at a specified ordinal position, then you do need to do something like what they are doing, but it can be done better. The standard way is:

1) Create New Table
2) Insert data into new table from old table
3) Drop old table
4) Rename new table
5) Apply indexes
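A minimal sketch of the batching idea in T-SQL (table, column, and string values are hypothetical; the OP's vendor script will differ). Each small transaction lets the log truncate between batches under Simple Recovery, which is where the big win usually comes from:

```sql
-- Batched REPLACE sketch: assumes a flag column (Processed) or similar
-- marker exists to track which rows are done. Names are made up.
DECLARE @BatchSize INT;
SET @BatchSize = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) dbo.BigTable
    SET    XmlCol = CAST(REPLACE(CAST(XmlCol AS NVARCHAR(MAX)),
                                 'oldstring', 'newstring') AS XML),
           Processed = 1
    WHERE  Processed = 0;

    -- Stop once a batch comes back short of the batch size.
    IF @@ROWCOUNT < @BatchSize BREAK;

    CHECKPOINT;  -- under Simple Recovery, encourages log truncation
END
```

The cast to NVARCHAR(MAX) is needed because REPLACE does not operate directly on the XML data type.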
 
One does a string REPLACE in XML columns in some 50,000,000 rows.

Awesome program you have there!

I don't know of anything to help you, but is it something that has to take the whole app offline? Could the app be set to read-only, with the database changes made on a mirror server somewhere (lab environment or otherwise)?
 
If this is a hotel system, it doesn't surprise me. If it is, I could probably guess the system. I am amazed at how poorly designed some of these systems are.
 
You say you have a 4GB box. Have you dedicated 3GB to SQL by adding the /3GB switch to the boot.ini file and restarting the SQL service?

That will give SQL 3GB (a bit more room for your buffer) and that will leave the OS w/ 1GB.
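For reference, the switch goes on the OS boot entry in boot.ini, something like this (the ARC path and OS name will vary per machine; this is just an illustrative fragment for Windows Server 2003):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```

A reboot is required for the switch to take effect, so plan it into the maintenance window.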

Is your TempDB on a separate set of spindles from the data and log? If not, you may want to move TempDB and grow it out to a decent working size.

Also, I see where you say the process creates a temp table per current table (4,000+), writes the current data into the temp table, and then drops the current table and rebuilds it.

Instead of creating the temp table per current table and writing current data into the temp ... is it possible to just rename the current table?

That way you save the time of creating all 4000+ temp tables and writing all that data into them.

Then all you have to do after the rename is create the new tables w/ the new columns and start writing the data.
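The rename approach sketched above might look like this in T-SQL (table and column names are hypothetical; the point is that the original data is written only once, into the rebuilt table, instead of twice through a temp copy):

```sql
-- Rename-instead-of-copy sketch. Names are illustrative only.
-- 1) Move the existing table aside with a metadata-only rename.
EXEC sp_rename 'dbo.Orders', 'Orders_old';

-- 2) Recreate the table with the new data types.
CREATE TABLE dbo.Orders (
    OrderID INT           NOT NULL PRIMARY KEY,
    Notes   NVARCHAR(MAX) NULL     -- new data type goes here
);

-- 3) Write the data back once, then drop the old copy.
INSERT INTO dbo.Orders (OrderID, Notes)
SELECT OrderID, Notes
FROM   dbo.Orders_old;

DROP TABLE dbo.Orders_old;
```

sp_rename only touches metadata, so step 1 is effectively instant; the expensive part is the single INSERT...SELECT. Constraints, indexes, and permissions do not follow the rename automatically, so they would need to be re-applied on the new table.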

Thanks

J. Kusch
 
"...the /3GB switch to the boot.ini file..."

Can you give me a bit more on that? I don't know anything about that, but it sounds like something I _should_ know about.


-
Richard Ray
Jackson Hole Mountain Resort
 
An out-of-the-box SQL install will allocate up to 2GB of memory to SQL Server unless you throttle it back by setting a fixed amount (say 1GB).

By enabling the /3GB switch, you are allocating 1 more GB of available memory to SQL Server to use in its Buffer Cache.

Take a look at the buffer cache hit ratio and the cache hit ratio. If they are high (95%+), the extra 1GB of memory will help.
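One way to check the buffer cache hit ratio from T-SQL rather than Perfmon (this DMV exists on SQL 2005; the raw counter must be divided by its companion base counter to get a percentage):

```sql
-- Compute the buffer cache hit ratio from the performance-counter DMV.
SELECT (a.cntr_value * 100.0 / b.cntr_value) AS BufferCacheHitRatioPct
FROM   sys.dm_os_performance_counters a
JOIN   sys.dm_os_performance_counters b
       ON a.object_name = b.object_name
WHERE  a.counter_name = 'Buffer cache hit ratio'
AND    b.counter_name = 'Buffer cache hit ratio base';
```

If this comes back consistently above ~95%, memory pressure on the buffer pool is likely part of the story and the extra 1GB is worth pursuing.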

Unless your system is very OS heavy, I would go with setting the /3GB.

Thanks

J. Kusch
 