I have a MultiLoad script running in production that updates 210 million rows on each of two tables (420 million rows total), and it has been running for 19 hours. Does anybody know why it is taking that long?
The number of secondary indexes has a dramatic effect on updates; with extreme updates like this, that is usually a contributing factor. Is the session still removing rows for the update? Use LOCKING TABLE FOR ACCESS and a row count to see whether rows are being moved in or out.
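A minimal sketch of that progress check, assuming a hypothetical target table named tgt_db.big_table; run it a few minutes apart and compare counts to see whether rows are still moving:

```sql
-- Dirty-read row count; the ACCESS lock avoids blocking (or being
-- blocked by) the running MultiLoad job. Table name is hypothetical.
LOCKING TABLE tgt_db.big_table FOR ACCESS
SELECT COUNT(*) FROM tgt_db.big_table;
```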
We NEVER update a table that large without dropping all secondary indexes first. We also prefer an INSERT-SELECT into an empty table when possible, to prevent very long rollbacks.
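A sketch of that create-new / drop-old pattern, with hypothetical table and column names (id, col_a, col_b, status_cd); an INSERT-SELECT into an empty table takes the fast path and, unlike a mass UPDATE, generates almost no transient-journal activity:

```sql
-- Build an empty copy of the target (hypothetical names throughout).
CREATE TABLE tgt_db.big_table_new AS tgt_db.big_table WITH NO DATA;

-- Apply the "update" inline while copying, instead of updating in place.
INSERT INTO tgt_db.big_table_new (id, col_a, col_b, status_cd)
SELECT id,
       col_a,
       col_b,
       CASE WHEN col_a > 0 THEN 'A' ELSE 'I' END
FROM tgt_db.big_table;

-- Swap the tables once the copy is verified.
RENAME TABLE tgt_db.big_table TO tgt_db.big_table_old;
RENAME TABLE tgt_db.big_table_new TO tgt_db.big_table;
DROP TABLE tgt_db.big_table_old;
```

Secondary indexes and statistics would then be created fresh on the new table rather than maintained row by row during the load.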
Thank you very much for sharing your idea with me. I have a small question for you: doesn't it also take a long time to drop the indexes, recreate them afterwards, and then collect statistics on them?
My application: a table with 222 million rows.
I am updating two columns based on some conditions.
So, which method would you prefer?
Not 19 hours. We don't do many updates; we do mostly inserts, so create-new / drop-old is preferred.
For our updates we have a 2 million row table that updates a 350 million row table. We do drop the secondary indexes first, and the source and target tables have the same PI to reduce spool.
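A minimal sketch of that kind of PI-aligned update, assuming hypothetical tables src_db.delta and tgt_db.big_table that both declare id as their primary index; matching PIs mean the joined rows already co-locate on the same AMPs, so no redistribution spool is needed:

```sql
-- Both tables share PRIMARY INDEX (id), so the join is AMP-local.
-- Table and column names are hypothetical.
MERGE INTO tgt_db.big_table AS tgt
USING src_db.delta AS src
  ON tgt.id = src.id
WHEN MATCHED THEN UPDATE
  SET col_a = src.col_a,
      col_b = src.col_b;
```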
I have been told that the process for an update is to move the row from the target table into the transient journal (TJ), update it, and then reinsert it into the original table, and that no row is updated until all the rows have been moved to the TJ. Also, if you have secondary indexes, those are additional subtables that go through the same process.
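If you want to watch that journal activity while the job runs, one rough check is to poll the journal's size through the standard DBC views; this assumes the journal shows up in database DBC under the name "TransientJournal", which may vary by Teradata release:

```sql
-- Rough check of transient-journal growth; poll a few times during the
-- update. The journal table name is an assumption and may differ by release.
SELECT TableName,
       SUM(CurrentPerm) / 1024**3 AS CurrentPermGB
FROM DBC.TableSizeV
WHERE DatabaseName = 'DBC'
  AND TableName = 'TransientJournal'
GROUP BY TableName;
```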