
Alter table VARCHAR to NVARCHAR .. SLOW

Status
Not open for further replies.

jballum

MIS
Jul 10, 2000
161
CA
Basically I am performing the following against a table with millions of rows in it and it is taking quite a long time.

EXECUTE IMMEDIATE 'ALTER TABLE <tablename> MODIFY <column> NVARCHAR2(60)'

Do you have any opinions on how to speed this process up? I am not sure how Oracle is handling this in the background, so other than dropping indexes before the command and recreating them afterward, I can't think of anything else.

Is there a lot of logging going on?
Would exporting data out and then reimporting it be quicker?

Any opinions would be greatly appreciated.

Thanks

JOBA
 
Yes, JOBA, for some strange reason, an "ALTER TABLE...MODIFY...NVARCHAR2..." takes a painfully inordinate amount of time, especially where millions of rows are involved.

In the test that I just ran on your behalf (with a table containing 2 million rows), it took a painful 3 minutes 31 seconds for the ALTER; when I used the following alternative, the process took only 26 seconds (roughly an 88% improvement):
Code:
-- 1. Copy the data out to a work table.
create table <new_name> as select * from <old_name>;

-- 2. Empty the original table so the ALTER has no rows to convert.
truncate table <old_name>;

-- 3. The column modification is nearly instantaneous on an empty table.
alter table <old_name> modify <col_name> NVARCHAR2(60);

-- 4. Reload the data, then drop the work table.
insert into <old_name> select * from <new_name>;

drop table <new_name>;
Let us know your findings.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[I provide low-cost, remote Database Administration services: www.dasages.com]
 
Thanks for the tips, they helped.

Before: ~24 minutes (data and indexes exist)

After Test 1 - all data but no indexes existed on the table (indexes were added after the conversion was run): 8 minutes 30 seconds

After Test 2 - no data or indexes existed on the table (data and indexes were added after the conversion was run): 6 minutes 55 seconds (4 minutes 50 seconds to add data, 2 minutes 4 seconds for indexes)

Not to be greedy (since you gave me some nice tips already), but is there anything I could do with regards to controlling rollback segments/redo logs that might improve performance?

John
 
John,

Writing changes to redo logs is asynchronous, so you should not notice any performance issues writing redo logs. I typically do not attempt to interfere with logging (i.e., by issuing NOLOGGING commands where available), since the redo logs are what allow us to RECOVER a database to the most recently committed transaction in case of hardware failure.
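That said, if reduced recoverability is an acceptable trade during a one-off maintenance window, Oracle does let you bypass most redo generation for bulk loads by combining NOLOGGING with a direct-path insert (the table names below are placeholders; this is a sketch, not a recommendation to skip logging routinely):

Code:
-- Assumes a maintenance window where reduced recoverability is acceptable.
ALTER TABLE <tablename> NOLOGGING;

-- The APPEND hint requests a direct-path insert, which generates
-- minimal redo when the table is in NOLOGGING mode.
INSERT /*+ APPEND */ INTO <tablename> SELECT * FROM <source_table>;
COMMIT;

-- Restore normal logging and take a fresh backup afterward,
-- since NOLOGGING changes cannot be recovered from the redo logs.
ALTER TABLE <tablename> LOGGING;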

Rollback segments are necessary for producing a consistent read image: 1) so that other processes can access committed data while you are making changes, and 2) in case you choose to UNDO your changes. There is nothing you can (or should) do to interfere with rollback-segment generation.

There are, however, some rollback-segment behaviours that explain why some rollback usage is costly and some is not:

1) Remember that rollback segments contain "pre-change" images of your data. Therefore, INSERTs are the least costly rollback activity: before an INSERT, the row did not exist, so the rollback entry for an INSERTed row is virtually non-existent. The costliest rollback is for DELETEs, since the pre-change image of a DELETEd row is the entire row.

2) A ROLLBACK is a very costly activity since Oracle must access the rollback segment and re-write everything there back to the actual data blocks for modified rows. A COMMIT costs virtually nothing.

So, these are the bits of wisdom I can share with you that may explain rollback performance.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[I provide low-cost, remote Database Administration services: www.dasages.com]
 
Mufasa,

Thanks for your detailed explanation with regards to redo-logs and rollback segments.

Actually, what I was asking for was any suggestions for how to manipulate any Oracle processes (rollback / redo logs / storage) to speed things up temporarily to handle some maintenance that is going on.

For example: Let's say I am inserting 20 million records into a pre-defined table (indexes exist).

As of right now, to speed things up, I have tested the following and seen significant improvements over just inserting the records directly:

1. Dropping indexes
2. Insert records
3. Recreate indexes
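The three steps above can be sketched in SQL as follows (index, table, and column names are placeholders; the APPEND hint and NOLOGGING on the index rebuild are optional extras, not part of the original recipe):

Code:
-- 1. Drop the indexes before the bulk load.
DROP INDEX <index_name>;

-- 2. Load the rows; a direct-path insert (APPEND hint) can also help here.
INSERT /*+ APPEND */ INTO <tablename> SELECT * FROM <staging_table>;
COMMIT;

-- 3. Recreate the indexes afterward; NOLOGGING speeds the build
--    at the cost of recoverability until the next backup.
CREATE INDEX <index_name> ON <tablename> (<col_name>) NOLOGGING;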

I was wondering if anything else could be done?

Dedicating a very large rollback segment to the table/indexes so that fewer extents would have to be built on the fly?

Things like this? I am not an expert on the internal workings of Oracle and was hoping to get some guidance along these lines.

Thanks again for your help.

JOBA
 
JOBA said:
1. Dropping indexes
2. Insert records
3. Recreate indexes
Yes, this is a good practice when inserting very large numbers of records.
JOBA said:
Dedicating a very large rollback segment to the table/indexes so few extents would have to be built on the fly?
Having rollback segments large enough to accommodate your largest transaction(s) is vital. But doing so (having large RB segments available) has no speed/performance impact on Oracle since Oracle builds all rollback extents "on the fly", on an as-needed basis.
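For completeness: on releases that use manually managed rollback segments, you can at least direct a large transaction at a specific, pre-sized segment. This does not make Oracle faster so much as it avoids mid-load extent-allocation surprises and "snapshot too old" errors (the segment and table names below are placeholders):

Code:
-- Must be the first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT <big_rbs>;

INSERT INTO <tablename> SELECT * FROM <staging_table>;
COMMIT;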



In review, the biggest performance gain you have mentioned so far in this thread is the "drop indexes...insert rows...re-create indexes" tactic; otherwise, you have very little control over the performance of Oracle's rollback segments and redo logs.



[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[I provide low-cost, remote Database Administration services: www.dasages.com]
 