
Need to de-frag entire production DB 2


yakum0 (IS-IT--Management) · Jul 27, 2004
I would like to try something but need to check first if it is going to work.

We urgently need to reorganize/defrag our production database server but cannot get sufficient down time. The Oracle version we are using is 8.1.7.4, running on the HP-UX 11 Unix operating system.

However, I had a thought. I currently have a spare machine with sufficient RAM, CPU and disk capacity to restore the production database onto. Then I could simply de-frag (i.e. export/import) the entire DB on the restored system, leaving the production DB server operational. Once de-fragged, the restored system could be brought up to date by manually applying copies of the archived redo logs from the real production database server. The final step would involve restoring the de-fragged system back onto the actual production database server. Hope this makes sense.

Can anyone confirm if this is going to work?

If this is not going to work, please could you explain why? Also, could you suggest another way of completely de-fragging my production database server, without upgrading the Oracle version or buying any additional software, possibly making use of the spare machine I have?

Many thanks

Steve
 
This sounds a bit iffy to me - I can't say definitively that it won't work, but I'm not sure how you are going to apply the archive logs. It might work, but I would certainly recommend you try this on a throwaway database first!

The problem I see is that (1) you are going to have to throw away any data that is changed between the time you export and the time you import, and (2) getting rid of this data is ALSO going to be recorded in your archive logs. Consider this scenario:
You export your sales table.
Several sales get recorded before the import.
You truncate/drop the table in preparation for the import. This gets recorded in the redo/archive log.
You then import the old data (which does not have the new sales data).
You then recover the database. It applies the sales records (good) and then removes all of the data from the table (bad) and then loads all of the old data back in (could be worse). I may be wrong on this, but it doesn't sound too good!

Would it be possible to maybe reorganize one tablespace at a time? That way you could reorganize incrementally and over time get rid of the fragmentation.
 
On further reflection, you could probably do an incomplete recovery. That way you could recover to the point just before you drop/truncate your tables. The changes would be applied and your defrag would be accomplished.
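
If you go that route, an incomplete recovery on 8i would look something like the sketch below, run from SQL*Plus against the mounted database. The timestamp is purely illustrative -- it would be whatever point in time sits just before the drop/truncate:
Code:
startup mount
recover database until time '2004-07-27:12:00:00'
alter database open resetlogs;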
 
Yakum,

I have another idea that will defrag not only your tables and indexes, but also your tablespaces. I have done this with great success on a badly fragmented, 24-tablespace, 41,756-table, 75,603-index, 85-schema, 200-gigabyte instance. The process runs remarkably fast -- about 6 MB/second, depending upon your server -- much, much faster than export/import.

Here is an overview of the process. For each tablespace:
1) Create a new, well-behaved (LMT) tablespace as a target for each of your existing tablespaces.
2) MOVE (not export/import, but MOVE) your existing tables and REBUILD your indexes into the new, pristine tablespace.
3) DROP your old, fragmented tablespace and reclaim its disk space.
4) ALTER your users' DEFAULT TABLESPACE to the new TABLESPACE.
(Note: You must still export/import Tables that contain LONG columns.)
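
(If you want to know up front which tables contain LONGs, and will therefore need the export/import treatment, a quick look at the data dictionary along these lines will tell you:)
Code:
select owner, table_name, column_name
  from dba_tab_columns
 where data_type in ('LONG', 'LONG RAW')
 order by owner, table_name;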

Here is the code to handle the hard parts of the above process:

1) Create a new tablespace (or tablespaces) to replace the old tablespace(s):
Code:
CREATE TABLESPACE <name> DATAFILE <'file_name'> SIZE <nnM>
AUTOEXTEND ON NEXT <nnM> MAXSIZE <nnnnM>
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
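
For illustration only (the tablespace name, datafile name and sizes below are made-up placeholders; use whatever fits your own layout):
Code:
CREATE TABLESPACE users_new DATAFILE '/u02/oradata/PROD/users_new01.dbf' SIZE 500M
AUTOEXTEND ON NEXT 100M MAXSIZE 4000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;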

2) Move your tables and indexes from the old tablespace to the new tablespace. The best/easiest method is to use a SQL script that creates a SQL script to do all your moving/rebuilding for you:
Code:
set echo off
set feedback off
accept old_tsname prompt "Enter the name of the old (source) tablespace: "
accept new_tsname prompt "Enter the name of the new (target) tablespace: "
set pagesize 0
set verify off
set trimspool on
spool TempMove&old_tsname..sql
prompt set echo on
prompt set feedback off
rem Generate one ALTER TABLE ... MOVE per table in the old tablespace, largest first
select 'alter table '||owner||'.'||segment_name||' /* size: '||sum(bytes)||
' */ move parallel nologging tablespace &new_tsname;'
from dba_extents
where tablespace_name = upper('&old_tsname') and segment_type = 'TABLE'
group by owner,segment_name
order by sum(bytes) desc,owner, segment_name
/
rem Generate one ALTER INDEX ... REBUILD per index in the old tablespace
select 'alter index '||owner||'.'||segment_name||' /* size: '||sum(bytes)||
' */ rebuild parallel tablespace &new_tsname;'
from dba_extents
where tablespace_name = upper('&old_tsname') and segment_type = 'INDEX'
group by owner,segment_name
order by owner, sum(bytes) desc, segment_name
/
prompt set feedback on
spool off
prompt
prompt Wrote 'TempMove&old_tsname..sql'
prompt
set pagesize 354
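
To use it, save the script above as, say, "MakeMove.sql", run it from SQL*Plus as a DBA, then run the "TempMove..." script it writes. The user name, password and file names below are placeholders only:
Code:
# from the HP-UX shell -- the account needs ALTER ANY TABLE and ALTER ANY INDEX
sqlplus dbauser/dbapass @MakeMove.sql
sqlplus dbauser/dbapass @TempMoveUSERS_OLD.sql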

3) If you have any tables with LONG columns, they will not "MOVE"; you must export and then import them. The following script does three things:
a) Creates scripted commands to export any tables remaining in the old tablespace that did not move as a result of the previous script. (Resulting script: "TempExp<old_tsname>.bat")
b) Creates scripted commands to drop those remaining tables from the old tablespace and to change the DEFAULT TABLESPACE of users from the old tablespace to the new one. (Resulting script: "TempDropLongs<old_tsname>.sql")
c) Creates scripted commands to import the objects dumped by "TempExp<old_tsname>.bat". (Resulting script: "TempImp<old_tsname>.bat")
Code:
set pagesize 0
set trimspool on
set linesize 500
set feedback off
set verify off
set echo off
accept tsname prompt "Enter the tablespace name that holds the tables with LONGs: "
accept newtsname prompt "Enter the new, target tablespace name: "
accept syspw prompt "Enter the password for SYS: "
spool TempExp&tsname..bat
select 'exp buffer=15000000 compress=n grants=y feedback=1000 consistent=y '
||'file='
||owner||'.'||segment_name||'.dump tables='||owner||'.'||segment_name
||' userid=sys/&syspw@'||instance_name
from dba_extents, v$instance
where tablespace_name = upper('&tsname') and segment_type = 'TABLE'
group by owner,segment_name,instance_name
order by sum(bytes),owner,segment_name
/
spool off
prompt
prompt Wrote "TempExp&tsname..bat"
prompt
spool TempDropLongs&tsname..sql
prompt set echo on
prompt set feedback off
select 'drop table '||owner||'.'||table_name||' cascade constraints;'
from dba_tables
where tablespace_name = upper('&tsname')
order by owner,table_name
/
select 'alter user '||username||' default tablespace &newtsname;'
from dba_users
where default_tablespace = upper('&tsname')
order by username
/
spool off
prompt
prompt Wrote "TempDropLongs&tsname..sql"
prompt
spool TempImp&tsname..bat
select 'imp buffer=15000000 grants=y feedback=1000 full=Y file='
||owner||'.'||segment_name||'.dump '
||' userid=sys/&syspw@'||instance_name
from dba_extents, v$instance
where tablespace_name = upper('&tsname') and segment_type = 'TABLE'
group by owner,segment_name,instance_name
order by sum(bytes),owner,segment_name
/
spool off
set feedback on
set linesize 180
prompt
prompt 'Wrote "TempImp&tsname..bat"'
prompt
4) Successfully run script "TempExp<old_tsname>.bat".
Successfully run script "TempDropLongs<old_tsname>.sql".
Successfully run script "TempImp<old_tsname>.bat".
5) Drop the old tablespace.
6) From the operating-system prompt, rm/erase the files that supported the old tablespace.
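
For steps 5 and 6, something along these lines (the tablespace and datafile names are placeholders; on 8i you drop the tablespace from SQL*Plus and then remove its datafiles at the OS level):
Code:
DROP TABLESPACE users_old INCLUDING CONTENTS;
-- then, from the HP-UX shell:
-- rm /u02/oradata/PROD/users_old01.dbf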

You will still want to do this during off-hours to avoid moving objects that are otherwise "in use". But it happens so quickly that the unavailable time for any object (during its actual MOVE) is very low. You also avoid the risky business of trying to apply logs to re-synchronise the objects.

Let us know if this seems reasonable for you.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 15:33 (27Jul04) UTC (aka "GMT" and "Zulu"), 08:33 (27Jul04) Mountain Time)
 
Thanks Dave (alias Mufasa), your response is both insightful and sufficiently detailed for me to implement.

I have just one problem. Our system is a SAP system, and SAP systems typically have in excess of 14,000 tables!

That is why I was hoping that I could use the spare machine in some way to reorganise a copy of the database, then after the reorg try to bring the copy back into sync with the live system.

Regards

Steve

 
Yakum,

Keep in mind that your need to defrag the database is, as far as I can tell, a non-critical activity that should certainly be able to wait for your scheduled database maintenance. In the example I mentioned, the database I reorganised had nearly 4 times as many tables, and since I ran the MOVE scripts in parallel, it took only about 5.5 hours to reorganise the 200 GB -- certainly doable in a scheduled-maintenance window.

Let us know what you decide to do and your satisfaction with the results.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 20:32 (27Jul04) UTC (aka "GMT" and "Zulu"), 13:32 (27Jul04) Mountain Time)
 
Since you have a second server with spare capacity, what about using a standby database and then switching over to that?
 
DB,

I am concerned. If Yakum's objective is to defrag the database, and if that occurs on the "standby database" only, have you not now caused the "standby database" to become hopelessly "un-synch-able" with the redo logs from the original database? Maybe I'm missing something. Please correct me.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 21:15 (27Jul04) UTC (aka "GMT" and "Zulu"), 14:15 (27Jul04) Mountain Time)
 
Dave H.: Actually, the standby probably wouldn't help if the only reason is defragmentation.

yakum0: Why the need to defrag on a unix machine? What 'problem' is being addressed?
 