
Index Scan Density and best improvement.


oublie (Programmer)
Jun 7, 2000
Hi,

I'm currently looking at improving the scan density on a DB table in SQL 7. The current density is at 42.75%, and I have been unable to raise it through a reindex or index defrag. I read somewhere that if, instead of doing a reindex, you actually drop the indexes and recreate them, you can improve the scan density compared with what you get from a reindex. This could of course be scripted to occur when required. Has anyone any experience of using this technique, and if so, is there any benefit over a reindex or index defrag?

thanks.
 
Drop and Recreate the Index

Dropping and recreating an index with either DROP INDEX and CREATE INDEX or ALTER TABLE has several disadvantages. While the index is being dropped and recreated it is unavailable to queries, so query performance may suffer dramatically until the rebuild completes. The technique can also cause blocking, because all requests against the index are blocked until it is rebuilt, and it can itself be blocked, since the drop cannot take place while other processes are using the index. Another major disadvantage is that rebuilding a clustered index with DROP INDEX and CREATE INDEX rebuilds all of the nonclustered indexes twice: once when the clustered index is dropped and the nonclustered index row pointers are repointed to the data heap, and again when the clustered index is rebuilt and the row pointers are repointed back to the clustered index row locations.
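For reference, a drop-and-recreate pass might look like the sketch below. The table, column, and index names are made up for illustration, and the FILLFACTOR value is just an example; note the index is unavailable between the two statements.

```sql
-- Hypothetical example: rebuild the clustered index on dbo.Orders
-- by dropping and recreating it. Every nonclustered index on the
-- table gets rebuilt twice as a side effect, so schedule carefully.
DROP INDEX Orders.IX_Orders_OrderDate
GO
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH FILLFACTOR = 90
GO
```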

Dropping and rebuilding an index does have the advantage of completely rebuilding it, which reorders the index pages, compacts them, and drops any unneeded pages. You may want to consider dropping and rebuilding indexes that show high levels of both internal and external fragmentation to get them back to where they should be.

Recreate the Index With the DROP_EXISTING Clause

To avoid the processing it takes to rebuild the nonclustered indexes twice when you rebuild a clustered index, you can use CREATE INDEX with the DROP_EXISTING clause. This clause keeps the clustered index key values, avoiding the need to rebuild the nonclustered indexes twice. Like the plain DROP INDEX and CREATE INDEX technique, it can cause and face blocking problems, and the index is unavailable during the rebuild. Another disadvantage is that CREATE INDEX with DROP_EXISTING forces you to find and rebuild each index on the table separately.

Besides the advantages of the regular drop-and-recreate technique, and the advantage of not having to rebuild nonclustered indexes twice, CREATE INDEX with the DROP_EXISTING clause can be used for indexes with constraints, provided the index definition exactly matches the requirements of the constraints.
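A single-statement rebuild with DROP_EXISTING might look like this sketch (again, hypothetical names and an example FILLFACTOR):

```sql
-- Hypothetical example: rebuild the clustered index in one statement.
-- DROP_EXISTING keeps the clustered key values, so the nonclustered
-- indexes are not rebuilt twice as they would be with DROP/CREATE.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH DROP_EXISTING, FILLFACTOR = 90
GO
```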

Execute DBCC DBREINDEX

DBCC DBREINDEX is similar to CREATE INDEX with DROP_EXISTING, but it rebuilds the index physically, allowing SQL Server to assign new pages to the index and reduce both internal and external fragmentation. DBCC DBREINDEX also has the ability to recreate indexes that enforce constraints dynamically, unlike CREATE INDEX with DROP_EXISTING.

The disadvantages of DBCC DBREINDEX are that, like the techniques above, it can cause and face blocking, and that it executes within a transaction, so if it is stopped before completion you lose all the defragmentation performed so far.
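In use it might look like this sketch (hypothetical table and index names; the third argument is the fill factor, where 0 keeps the existing value):

```sql
-- Hypothetical example: rebuild a single index with a 90 fill factor,
-- then rebuild every index on the table (empty index name = all).
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_OrderDate', 90)
DBCC DBREINDEX ('dbo.Orders', '', 90)
```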

Execute DBCC INDEXDEFRAG

DBCC INDEXDEFRAG (available in SQL Server 2000) reduces external fragmentation by rearranging the existing leaf pages of an index into the logical order of the index key, and internal fragmentation by compacting the rows within index pages and discarding unneeded pages. Its big advantage is that it can defragment an index while other processes are accessing it, eliminating the blocking problems of the other techniques. The results, however, are not as complete: because of its online nature, DBCC INDEXDEFRAG skips locked pages and does not use any new pages to reorder the index. You may also find that it takes longer than recreating the index when the amount of fragmentation is large.
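A defrag pass, followed by a scan density check, might look like the sketch below (hypothetical names; note that on SQL 7, DBCC SHOWCONTIG takes object and index IDs rather than names, so the name form shown here is SQL 2000 syntax):

```sql
-- Hypothetical example: defragment one index online, then check the
-- resulting Scan Density figure in the DBCC SHOWCONTIG output.
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_OrderDate')  -- 0 = current database
DBCC SHOWCONTIG ('dbo.Orders', 'IX_Orders_OrderDate')
```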

Thanks

J. Kusch
 
Thanks J. Kusch,

A textbook answer, but have you any experience of doing this, and if so, was there a substantial enough change in scan density over doing a reindex to warrant a drop and recreate? (We have a scheduled outage on the system every Sunday, so blocking is not an issue.)
 
Did you run UPDATE STATISTICS every time you reindexed them?
 
Yes, the scripts we currently use were written to do all the normal admin, including stats etc.
 