
Question about table defragmentation


kpenza (Technical User), Jul 16, 2004
I am running IDS 9.30.UC2 on Linux on one of our production servers. During a maintenance job we noticed that a few tables have over 110 extents.
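(For reference, one way to spot tables like this is to count extents per table in the sysmaster database. This is a sketch; 'your_dbname' is a placeholder and the sysextents columns are from memory, so they may differ slightly between versions:)

select tabname, count(*) num_extents
from sysmaster:sysextents
where dbsname = 'your_dbname'
group by tabname
having count(*) > 100
order by 2 desc;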

Is there a way to defragment a table without going through an export and an import?

Any suggestions welcome.

Hi,

You can deal with these fragmented tables individually.

Preparation:

The database should be dormant.

dbschema -d dbname -t tablename > tablename.sql

Using your favorite method, determine what your initial extent / next extent sizes should be, and edit tablename.sql.
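For example, the create table statement in tablename.sql might end up looking like this once edited (the column names and sizes below are made-up placeholders; extent sizes are given in kilobytes, and you may need dbschema's -ss option for the storage clauses to appear in the output at all):

create table tablename
  (
    id integer,
    descr char(20)
  )
  extent size 64000 next size 16000
  lock mode page;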

execution:

Using dbaccess, unload the table:

unload to "some_filename" delimiter "|"
    select * from tablename;

Drop the table.

Execute tablename.sql to recreate the table and indexes.
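For example, from the shell:

dbaccess dbname tablename.sql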

Load the table from file "some_filename":

load from "some_filename" delimiter "|"
    insert into tablename;

Final step: update table statistics.
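For example (MEDIUM is a common starting point for big tables; pick the mode that suits your data):

update statistics medium for table tablename;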

Good luck,

Mike
 
Hi,

load from "some_filename" delimiter "|"
insert into tablename ;

The above SQL statement runs as a single transaction for the entire data file; hence it may not work properly, and it has the potential to escalate locks severely, resulting in lock-table overflow conditions.

I would suggest using the dbload utility for massive or bulk data loads, since it can commit in batches.
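For example (a sketch from memory, so double-check against the dbload documentation; the column count, commit interval, and file names below are placeholders). First, a command file, say tablename.cmd:

FILE "some_filename" DELIMITER '|' 5;
INSERT INTO tablename;

Then run it, committing every 1000 rows so no single transaction gets too big:

dbload -d dbname -c tablename.cmd -l errors.log -n 1000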

Regards,
Shriyan
 
Hi Shriyan,

Good point; I overlooked that 'cause kpenza did not indicate the actual size of the table, only mentioned there were 110 extents.

I've run into long-transaction problems, but I've never had a lock table overflow (I have LOCKS set to 3.5 million).
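(For anyone looking for the knob: that's the LOCKS parameter in $INFORMIXDIR/etc/$ONCONFIG, e.g.:)

LOCKS   3500000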

I would deal with the long-transaction problem by turning off transaction logging (ontape -s -N <dbname>), since the DB needs to be dormant for the unload/load anyway.

Also, remember to turn logging back on afterwards.
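Something like this (the mode flags are from memory, so double-check: -N turns logging off, -U restores unbuffered logging, -B buffered; each mode change is made as part of taking an archive, and ontape prompts for the archive level if you don't supply one):

ontape -s -N dbname
ontape -s -U dbname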

Thanks - Mike
 
Thanks for the suggestions. The table has 5.4 million rows.

I wish to avoid unloading and reloading the tables, as there are around fifteen of them, all quite large, and I do not have much free filesystem space.


 
People,

I managed to find a method:

If an index already exists on the table (such as the primary-key index), cluster it:

ALTER INDEX <idx_name> TO CLUSTER;

This causes the table to be physically reordered by the index, which automatically defragments it. Afterwards, release the cluster attribute:

ALTER INDEX <idx_name> TO NOT CLUSTER;

Otherwise, create a clustered index on a column and then drop it:

CREATE CLUSTER INDEX <idx_name> ON tab(column);
DROP INDEX <idx_name>;

This way I managed to solve the problem.
BTW, there must be enough free space in the dbspace for two copies of the table while it is being rewritten.
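To confirm the reorg worked, oncheck can report the table's extent list (exact output varies by version):

oncheck -pt dbname:tablename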
 