
Massive increase in update time

Status
Not open for further replies.

rsai (Programmer)
Oct 9, 2002, GB
Hi

I have a problem with several update scripts whose run times have increased massively.

Each script updates one column on a table by joining to another table of the same size and then to a smaller lookup table. The following is an example:

UPDATE AGG_10_TT_DATA
SET PUB_ISBN_CHECK_SUM = UI.ID_FIELD
FROM AGG_10_TT_DATA AD,
     AGG_10_KEY_IDS AK,
     UNIQUE_ISBN UI
WHERE AD.AGG_10_ID_CHECK_SUM = AK.AGG_10_ID_CHECK_SUM
  AND AK.ISBN = UI.ISBN
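One way to narrow down a sudden slowdown like this is to compare the estimated query plan on the two servers before running the statement. A sketch, using SET SHOWPLAN_TEXT (available in SQL Server 2000), which returns the plan without executing the update:

```sql
-- Sketch: capture the estimated plan on both servers and compare them.
-- A table scan where the production plan shows an index seek would
-- point at missing/unused indexes or stale statistics on the copy.
SET SHOWPLAN_TEXT ON;
GO
UPDATE AGG_10_TT_DATA
SET PUB_ISBN_CHECK_SUM = UI.ID_FIELD
FROM AGG_10_TT_DATA AD,
     AGG_10_KEY_IDS AK,
     UNIQUE_ISBN UI
WHERE AD.AGG_10_ID_CHECK_SUM = AK.AGG_10_ID_CHECK_SUM
  AND AK.ISBN = UI.ISBN;
GO
SET SHOWPLAN_TEXT OFF;
GO
```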

The two AGG_ tables have 29121587 rows each and the UNIQUE_ISBN has 241905 rows.

When I ran these a couple of months ago on my production database they took no more than 2-3 hours to run, and the data volumes have not changed significantly in the meantime.

I have made a copy of the database on the same machine (quad processor, 5 GB memory). When I run the scripts on this copy they are still running 30-40 hours later; I have not been able to let them finish, if indeed they ever would.


I have run DBCC CHECKDB on the database and DBCC CHECKTABLE on the tables involved, with no errors reported. I have backed up the database and run a maintenance job to optimise the indexes etc., to rule out corruption.

Whilst the job is running there is very little CPU activity but a lot of I/O, so it appears to be disk-bound (the disks are held in a SAN).
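A suggestion for confirming the disk-bound theory while the update runs: check what the session is actually waiting on via master..sysprocesses (SQL Server 2000). The spid filter below is an assumption; substitute the spid of your update session.

```sql
-- Hedged sketch: inspect waits for active sessions while the update runs.
-- lastwaittype values like PAGEIOLATCH_SH / PAGEIOLATCH_EX would confirm
-- the job is waiting on disk reads/writes; waitresource shows where.
SELECT spid, status, waittype, lastwaittype, waitresource, physical_io
FROM master..sysprocesses
WHERE spid > 50;   -- user sessions only; narrow to the update's spid
```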

The database has AUTOSHRINK enabled, and whilst this job is running the AUTOSHRINK process is permanently blocked by it.
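AUTOSHRINK itself can generate heavy I/O and repeated grow/shrink cycles, so it is worth ruling it out as a contributor. A sketch of disabling it (YourDatabase is a placeholder name):

```sql
-- Hedged suggestion: autoshrink is rarely wanted on a busy database;
-- turning it off removes one source of background I/O and blocking.
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;
```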

The only difference I can see between the production database and this development version is that some of the data files on the development database have been placed on a different disk in the SAN; this disk is normally used for holding the backups. The two large tables are held on the normal data disk; the lookup table is on the backup disk.
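Since file placement is the one known difference, it may help to list exactly which physical files the development database uses. A sketch (YourDatabase is a placeholder name):

```sql
-- Hedged sketch: list each file's logical name, physical path, and size
-- to confirm which filegroups/objects ended up on the backup disk.
USE YourDatabase;
EXEC sp_helpfile;
```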

When the job is running in isolation, the only other process of note on the server is bengine.exe, which is running a backup of the backup disk. It uses no CPU and its I/O is fairly small compared with SQL Server's; moreover, its I/O is on the backup disk, whereas the SQL job is mainly accessing the main data disk.

The job, whilst running, does not seem to impact the performance of the production database to any great degree.

Any suggestions as to what might be slowing this job down?

 
Ok... If it used to take 2-3 hours and is now running for 40+, the first things I would look at are fragmentation and indexes...

To check fragmentation:

USE <your database>

DBCC SHOWCONTIG (tablename) WITH TABLERESULTS, ALL_INDEXES

Quick rule of thumb: Scan Density should be as close to 100% as possible. (If you are on SQL 7, anything after the WITH won't work; look up SHOWCONTIG in BOL.)

If scan density is below 70%, definitely rebuild that index! Read up on index management in BOL.
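For the rebuild itself, a sketch using the SQL Server 2000 command (the table name is from the thread; the fill factor is illustrative):

```sql
-- Hedged sketch: rebuild all indexes on the table.
-- '' means every index on the table; 90 is an example fill factor.
DBCC DBREINDEX ('AGG_10_TT_DATA', '', 90);
```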

Are your indexes up to date (statistics-wise)?
Rebuilding indexes won't hurt, but could take a while.

So try:
UPDATE STATISTICS TableName (IndexName) WITH FULLSCAN


If I think of anything else I will post it here. Good luck, and let us know if you find anything out.

Rob
 
Thanks for the post.

I checked the fragmentation and the scan densities are all as close to 100% as makes no difference. Likewise, I have just rebuilt all the indexes. In fact these are aggregate tables, and I had only just rebuilt them, data and all. Inserting all the rows into both tables took only a few hours; the problem comes in updating this one integer column.
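Since the single-statement update over 29 million rows is the bottleneck, one approach sometimes used on tables this size is to batch the update so each transaction stays small. A sketch using the SQL Server 2000 SET ROWCOUNT idiom; the IS NULL test is an assumed way of finding rows not yet updated, and would need adjusting to the real data:

```sql
-- Hedged sketch: update in batches of 100k rows to keep transactions
-- and log usage small. Assumes unprocessed rows have a NULL check sum.
SET ROWCOUNT 100000               -- cap each UPDATE at 100,000 rows

WHILE 1 = 1
BEGIN
    UPDATE AGG_10_TT_DATA
    SET PUB_ISBN_CHECK_SUM = UI.ID_FIELD
    FROM AGG_10_TT_DATA AD,
         AGG_10_KEY_IDS AK,
         UNIQUE_ISBN UI
    WHERE AD.AGG_10_ID_CHECK_SUM = AK.AGG_10_ID_CHECK_SUM
      AND AK.ISBN = UI.ISBN
      AND AD.PUB_ISBN_CHECK_SUM IS NULL   -- assumed "not yet done" test

    IF @@ROWCOUNT = 0 BREAK               -- no rows left to update
END

SET ROWCOUNT 0                    -- reset to unlimited
```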

I will be moving the database to a new development server shortly, so I will see if anything changes in the new environment, and update this post.
 