Tek-Tips is the largest IT community on the Internet today!


What's more efficient: insert with indexes, or drop indexes first?

Status
Not open for further replies.

WiccaChic

Technical User
Jan 21, 2004
179
US
Is it quicker to insert a million records into a table that is already quite large and heavily indexed, or to drop the indexes up front, do the insert, and then recreate the indexes?

Or is there an even better way? I already have logging off during the insert.
 
Hi,

A bulk insert of a million rows is most efficient if the target table is stripped of all its constraints and indexes. Constraints such as referential, primary-key, check, unique, and default constraints must be validated for every row of raw data being inserted. Existing indexes are even more expensive: each insert must traverse the B+tree to place the new key into the index pages, and that cost climbs sharply as the number of indexes grows.
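As a minimal sketch of the drop-then-recreate pattern (the table, column, index, and file names here are hypothetical, not from the original post):

```sql
-- Drop the secondary index before the big load.
DROP INDEX ix_orders_custno;

-- Bulk insert, e.g. via dbaccess LOAD from an unload file.
LOAD FROM 'orders.unl' INSERT INTO orders;

-- Recreate the index once, over the whole table, after the load.
CREATE INDEX ix_orders_custno ON orders (custno);
```

Recreating the index in one pass after the load sorts the keys once, instead of rebalancing the tree a million times.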

However, while the indexes and constraints are inactive, you run the risk of bad data slipping into your base table. After the mammoth insert you can run SELECT statements to check the conditions the constraints and unique indexes would have enforced, before putting the actual indexes back; otherwise the index creation or constraint re-imposition may fail with integrity-related errors.
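One such validation query, assuming a hypothetical orders table whose order_num column is supposed to be unique, would look like:

```sql
-- Find key values that would break the unique index
-- if it were recreated now.
SELECT order_num, COUNT(*)
  FROM orders
 GROUP BY order_num
HAVING COUNT(*) > 1;
```

If this returns no rows, the unique index (or primary-key constraint) can be recreated safely; any rows it does return must be cleaned up first.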

The other way to look at this scenario: if the list of constraints and indexes is long, keep them intact and execute the bulk insert as-is. If we knock them off before the insert, we have to put them all back as soon as the operation is over, and that step alone can take a long time. Your table may already hold several million rows for which the index pages are already present; only the new rows need entries created, so the volume of work is much smaller. In the dropped-indexes case, each index must be rebuilt for the entire table.

Another point to ponder: you may wish to increase the table's next-extent size to an appropriate value before the bulk insert job. You may also concentrate on the indexes only, even though I have included the constraints in this task. If you opt to deactivate the indexes, I would suggest using the "object mode" enable/disable facility available in the SQL, which is easier and more appropriate than dropping the objects outright. Example:

-- Before the load:
SET CONSTRAINTS FOR table_name DISABLED;
SET INDEXES FOR table_name DISABLED;

-- After the load (and after validating the data):
SET CONSTRAINTS FOR table_name ENABLED;
SET INDEXES FOR table_name ENABLED;
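For the next-extent size mentioned above, a sketch would be (the table name and the size value are hypothetical; pick a size based on the expected volume of the load):

```sql
-- Grow the next extent so the bulk insert allocates a few
-- large extents instead of many small ones.
-- In Informix the size is given in kilobytes.
ALTER TABLE orders MODIFY NEXT SIZE 100000;
```

Fewer, larger extents reduce allocation overhead during the load and keep the table less fragmented afterwards.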

However, if you have not dropped the indexes in the recent past, it may still be advantageous to drop and recreate them: building fresh index pages eliminates the page interleaving that accumulates over time.

IDS also supports a utility called the High-Performance Loader (HPL), which needs to be configured before you can use it. You can find more info at the following links:

HPL onpladm Utility:

High-Performance Loader User's Guide:

Regards,
Shriyan
 
vpshriyan,
does HPL buy me anything that would lessen the need to manage the indexes differently? We have always stayed away from it because the admin who was here before me insisted that it was too complicated to use.
 
