
Insert taking too long - needs advice


oracle8 (MIS) | Sep 30, 2000 | PH
Hi,

I inserted 30,000 records into a 30-column table and it took 2 hours to save. Any ideas on how to tune my database to reduce that 2-hour save time? I'm using Personal Oracle 9i on Windows 98.

Thank you so much

Rommel
 
There are a few things you should check:

a) Does the table have any triggers? If so, try disabling them to see if that speeds things up (see the sketch after this list).
b) Does the table have a lot of indexes? It might be quicker to drop them and re-create them after the insert.
c) Are there constraints on the table? You could try disabling them or setting them to deferred.
d) How are you inserting the data? Is it through some sort of front-end application? Is the application running on the same machine as the database? Is it doing the inserts one row at a time, or does it use array inserts?
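
If it helps, here is roughly what (a) through (c) look like in SQL*Plus. The trigger, index, and constraint names below are made up; look up your real ones in USER_TRIGGERS, USER_INDEXES, and USER_CONSTRAINTS:

-- (a) disable the table's triggers for the load, then re-enable them
SQL> ALTER TABLE mytable DISABLE ALL TRIGGERS;
SQL> -- ... run the load ...
SQL> ALTER TABLE mytable ENABLE ALL TRIGGERS;

-- (b) drop a non-essential index and rebuild it after the load
SQL> DROP INDEX mytable_col5_idx;
SQL> -- ... run the load ...
SQL> CREATE INDEX mytable_col5_idx ON mytable (col5);

-- (c) disable a constraint for the load, then re-enable it
SQL> ALTER TABLE mytable DISABLE CONSTRAINT mytable_fk1;
SQL> -- ... run the load ...
SQL> ALTER TABLE mytable ENABLE CONSTRAINT mytable_fk1;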

 
How often, if at all, do you commit during the 30,000-row loop?

Mark [openup]
 
guys,
The insert runs through an application and commits every row inserted. I have 15 indexes, 3 triggers, and one primary-key constraint. I can't disable these because of application concerns. I'm accessing the database on the same machine.

Thanks

 
You can use the /*+ APPEND */ hint for direct-path inserts and turn logging off for the table.

e.g.:

SQL> ALTER TABLE table1 NOLOGGING;
SQL> INSERT /*+ APPEND */ INTO table1 VALUES (1, 2);

I would also commit only every 500 or 1,000 rows or so, since every commit means the log writer flushes to the log files, increasing I/O.
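
If you can change the application, a batched-commit loop in PL/SQL looks something like this. (staging_table and the col1/col2 names are just placeholders for wherever your rows actually come from; adjust the batch size to taste.)

DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT * FROM staging_table) LOOP    -- placeholder source
    INSERT INTO table1 (col1, col2) VALUES (rec.col1, rec.col2);
    v_count := v_count + 1;
    IF MOD(v_count, 500) = 0 THEN                  -- commit every 500 rows, not every row
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;                                          -- pick up the final partial batch
END;
/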

Also make sure your log buffer is big enough, and that your log files are on fast disks and are big enough.
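
You can check what you have now with the standard dictionary views (bytes in both cases):

SQL> SELECT value FROM v$parameter WHERE name = 'log_buffer';
SQL> SELECT group#, bytes, status FROM v$log;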

This is just a summary of things I would check, but as far as tuning is concerned there is plenty more you can do!

HTH

Sy UK
 
I agree with Sy - you're committing WAY too often, and this is probably killing your performance.
 
Hmmm... It went faster for me when I had autocommit on, committing immediately. Maybe it's because no rollback segments are created.

You might check the size of the inserts and of the rollback segments. The larger the gap between commits, the larger the rollback segment required and the longer it will take to process.

 
Actually, the speed of a commit hardly depends on the size of the transaction; it may, however, take longer to roll it back. The time for a large update may be consumed by expanding the RBS, which is why dedicated RBSs are specified for large transactions with a SET TRANSACTION statement. Your RBS should be properly sized beforehand. BTW, what is your hardware (CPU/memory), and how much memory is allocated to Oracle?
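
For reference, pointing a big transaction at a dedicated rollback segment looks like this. BIG_RBS is a made-up segment name, and SET TRANSACTION must be the first statement of the transaction:

SQL> SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
SQL> -- ... the big insert ...
SQL> COMMIT;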

Regards, Dima
 
Whoa, Wichita,

"The larger the gap between commits, the larger the rollback segment required and the longer it will take to process."

Two assertions...both inaccurate in this case:

Q1. When you are doing an INSERT, how much rollback does it create?
A1. Virtually none.

Q2. How long does a COMMIT take for a 1-row INSERT versus a 500-row INSERT?
A2. Virtually the same. Therefore, the per-row cost of committing 1 row at a time versus 500 rows at a time is exorbitant. Oracle's COMMIT-processing algorithm bets that you will COMMIT rather than ROLLBACK (ROLLBACKs are relatively costly in Oracle, as well). If you disregard log-writer activity, COMMITs cost virtually nothing; log-writer activity represents the cost of a COMMIT. Therefore, the more row activity you can bundle into one transaction, the lower the per-row cost of the COMMIT.
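
If you want to see that cost on your own system, compare the commit count with the time spent waiting on the log writer; both come from the standard v$ views (rough, instance-wide numbers):

SQL> SELECT name, value FROM v$sysstat WHERE name = 'user commits';
SQL> SELECT event, total_waits, time_waited FROM v$system_event WHERE event = 'log file sync';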

Carp's and Sy's suggestion to "bundle" more rows between COMMITs is definitely one way to improve overall performance.

I hope this helps set things straighter.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 17:28 (14Oct04) UTC (aka "GMT" and "Zulu"), 10:28 (14Oct04) Mountain Time)
 
I apologize. I ran into this problem on my legacy machine (DEC VAX) with Oracle 6.37. I remember because I got a royal chewing when it crashed.
 
Wichita,

Then you received an undeserved "royal chewing". Unfortunately, you were the scapegoat for someone else's problem if you were "packaging" multiple rows' activity into one transaction and the system crashed.

If, however, you NEVER committed until the end of an extremely long-running transaction that filled up a rollback segment or its tablespace, then that is another story.

Bottom line: It is important to "COMMIT early and often", but COMMITing for every row change is costly.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 16:49 (14Oct04) UTC (aka "GMT" and "Zulu"), 09:49 (14Oct04) Mountain Time)
 
Guys,

Thank you so much for all the ideas you've contributed. I applied Sy's suggestion and the save time was cut to 1 hour, but I think that's still a long time. That's why I'm still hoping for more valuable ideas from all of you on this matter.

Thank you

Rommel
 