
why did this DW datapump load fail?


raygg (Technical User)
I am new to loading data warehouse databases.

The load hung after 6 hours (after 2.1 billion rows) into a large fact table (2.4 billion rows), and I got the error message below. At the time of the hang the tablespace dbf file was 67,108,848KB, and there is 164GB of free space on the drive, which is managed with RAID5.

statement in resumable session 'RAYGG.WEATHER01_NUM10_FACT_ALL.1' was suspended due to
ORA-01653: unable to extend table METARDW.WEATHER2 by 4096 in tablespace FACT_TS

This is running on a Windows 32-bit 2003 server with 3GB RAM. This is a brand new use of this server and hard drive - it was just wiped and reimaged before I put this database on the machine, so defragmentation is not an issue.

There were two tables specified in the parfile. After this table hung, impdp loaded the second, smaller table, as shown below in the impdp log.

I am stumped: there is plenty of room left on the drive, the next table loaded OK, there is no space limit in the table definition, and there is no space limitation on the tablespace.
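A quick way to see what Oracle itself thinks, as opposed to what the OS reports for the drive, is something like the following (names taken from the create statements below):

Code:
-- Free space Oracle can still allocate inside the tablespace
SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM dba_free_space
WHERE tablespace_name = 'FACT_TS'
GROUP BY tablespace_name;

-- Current size and autoextend ceiling of each datafile
SELECT file_name, bytes/1024 AS kb, autoextensible, maxbytes/1024 AS max_kb
FROM dba_data_files
WHERE tablespace_name = 'FACT_TS';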

____________________________
Here is the table desc
____________________________
DROP TABLE WEATHER2 CASCADE CONSTRAINTS;
CREATE TABLE WEATHER2
( DIR CHAR(3 BYTE),
SPD INTEGER,
-- (a bunch of other columns)
)
TABLESPACE FACT_TS
PCTFREE 10 PCTUSED 75 INITRANS 1 MAXTRANS 255 LOGGING
STORAGE(PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1);
____________________________________________
here is the tablespace create statement
____________________________________________
CREATE TABLESPACE FACT_TS DATAFILE 'E:\app\metardw\oradata\metardw\DATAFILE\FACT_TS.DBF'
SIZE 512M REUSE AUTOEXTEND ON NEXT 512M
EXTENT MANAGEMENT LOCAL
BLOCKSIZE 16K
AUTOALLOCATE
DEFAULT COMPRESS FOR OLTP;
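
One side note on the BLOCKSIZE 16K clause: if the database's default block size is the usual 8K, a matching 16K buffer cache has to be configured before this CREATE TABLESPACE will succeed (otherwise Oracle raises ORA-29339). The size below is purely illustrative:

Code:
-- Assumes the default db_block_size is not 16K; 64M is a placeholder
ALTER SYSTEM SET db_16k_cache_size = 64M;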

_____________________________
This is the log from impdp
_____________________________
C:\>impdp raygg/raygg parfile=C:\Oracle\DBAScripts\imp_WEATHER01_num10_FACT_ALL.PAR

Import: Release 11.2.0.1.0 - Production on Mon Nov 29 22:15:52 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "RAYGG"."WEATHER01_NUM10_FACT_ALL" successfully loaded/unloaded
Starting "RAYGG"."WEATHER01_NUM10_FACT_ALL": raygg/******** parfile=C:\Oracle\D
BAScripts\imp_WEATHER01_num10_FACT_ALL.PAR
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39171: Job is experiencing a resumable wait.
ORA-01653: unable to extend table METARDW.WEATHER2 by 4096 in tablespace FACT_TS

ORA-39171: Job is experiencing a resumable wait.
ORA-01653: unable to extend table METARDW.WEATHER2 by 4096 in tablespace FACT_TS

<<<The above error messages occur 24 times in total>>>

ORA-39171: Job is experiencing a resumable wait.
ORA-01653: unable to extend table METARDW.WEATHER2 by 4096 in tablespace FACT_TS

ORA-31693: Table data object "METARDW"."WEATHER2" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-30032: the suspended (resumable) statement has timed out
ORA-01653: unable to extend table METARDW.WEATHER2 by 4096 in tablespace FACT_TS

. . imported "METARDW"."WEATHER3" 128.2 KB 2189 rows
Job "RAYGG"."WEATHER01_NUM10_FACT_ALL" completed with 24 error(s) at 07:53:37


I was testing the loading with a smaller volume of the dmp file, using query statements that limited the number of rows loaded per table via the TABLES and QUERY options:

tables=dwuser.hc_dim
QUERY=hc_dim:"WHERE rownum < 20000"

This worked very effectively, but the reverse will not: I tried loading rownum > 19999 and 0 rows were loaded.
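
(As far as I can tell this is standard ROWNUM behavior rather than a Data Pump quirk: ROWNUM is assigned as each candidate row is returned, so the first row is always 1 and a "greater than" test can never pass. A rough illustration in plain SQL, using the table from the parfile above:)

Code:
-- Never returns rows: the first candidate gets ROWNUM = 1, fails the
-- test, is discarded, and the next candidate gets ROWNUM = 1 again.
SELECT * FROM dwuser.hc_dim WHERE rownum > 19999;

-- The usual workaround is to materialize ROWNUM in a subquery, though
-- impdp's QUERY option only appends a WHERE clause, so this pattern
-- does not carry over to the parfile directly.
SELECT *
FROM (SELECT t.*, rownum AS rn FROM dwuser.hc_dim t)
WHERE rn > 19999;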

So even after I allocate another datafile for the tablespace (not sure what else to do), I do not know how to restart the load - I can only assume I must restart from the beginning. The only other alternative I can think of would be to create a unique primary key and rerun the load, anticipating that the load would refuse all duplicate keys but would resume loading once it got past the end of the previously loaded rows. But I don't know whether this would take almost as long as the original load. The big difference is that it skips all those writes to the tablespace.

Any help would be appreciated.
 
Does the user in question have unlimited quota on the tablespace? If not, once the user's quota is reached, progress will come to a shuddering halt.
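
You can check the current quotas, and lift the cap if need be, with something along these lines (schema owner taken from your error message):

Code:
-- MAX_BYTES of -1 means unlimited; anything else is a hard cap
SELECT tablespace_name, bytes, max_bytes
FROM dba_ts_quotas
WHERE username = 'METARDW';

ALTER USER metardw QUOTA UNLIMITED ON fact_ts;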

Regards

T
 
The import log indicates that this error is resumable, so you should be able to restart it from where it left off. To do so, try the following.

1. Query catalog view dba_datapump_jobs to see what jobs exist. You should see one with OWNER_NAME='RAYGG' and JOB_NAME='WEATHER01_NUM10_FACT_ALL'.
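
For example:

Code:
SELECT owner_name, job_name, state
FROM dba_datapump_jobs;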

2. Attach to the interrupted job.

Code:
expdp raygg@your_db attach=weather01_num10_fact_all

3. From the attached session, restart the job from where it left off.

Code:
Export> start_job

If all goes well, you will be able to finish the job without cleaning up from the failed import.
 
Of course you should attach using "impdp", not "expdp". Sorry for the lapse.
 
Quotas for users were not set up. I have since given the schema owner of the tables in the TS unlimited quota. I tried to load another TS: one table with 200M rows loaded, and then the next table failed trying to extend - same circumstances as the problem above.


 
Re trying to restart - for some reason, as you can see, the weather2 table stopped loading, then the 2nd and last table requested in the impdp loaded, and the job ended. Once the job ends, you cannot restart it, I found out.
 
To all who contributed - I think I know why the first job ended prematurely. I had allocated only 1 datafile in a tablespace with a 16K blocksize. Oracle limits a datafile to 4,194,304 blocks, which caps the datafile size for a 16K-blocksize tablespace at 67,108,864KB. The datafile size after it stopped loading was 63.9GB (68,719,460,352 bytes). If the blocksize is 8K, the max datafile size is 32GB.
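
(The arithmetic checks out: 4,194,304 blocks × 16KB = 67,108,864KB = 64GB, and 4,194,304 × 8KB = 33,554,432KB = 32GB. A query along these lines shows how close each file is to the ceiling:)

Code:
-- Blocks used per datafile vs. the 4,194,304-block smallfile limit
SELECT file_name, blocks, bytes/1024 AS kb
FROM dba_data_files
WHERE tablespace_name = 'FACT_TS';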

I added autoextend with maxsize unlimited to the tablespace create statements, but I am not sure of the coding to specify a 32GB maxsize per datafile. I know I can go to 64GB, but since this is on Windows I want to keep the datafiles to 32GB max.

So the question is: how do I specify a datafile maxsize and multiple datafile names on the create statement? Or is the answer that you can't - that you have to do ALTER TABLESPACE ADD DATAFILE for each datafile to be added?

Any help would be appreciated.
 
As you might guess, it's easy to specify multiple datafiles in a tablespace create statement. It's also possible to specify differing initial size and maximum size for each datafile. An example of the syntax is

Code:
CREATE SMALLFILE TABLESPACE "RAYGG_TS"
DATAFILE
  'fully qualified path and file name to file 1 here'
     SIZE 500M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 1000M, 
  'fully qualified path and file name to file 2 here'
     SIZE 100M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 250M
...
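
And to the other half of your question: you are not locked into the create statement. You can also grow the tablespace later, one file at a time, with something like

Code:
ALTER TABLESPACE RAYGG_TS
ADD DATAFILE 'fully qualified path and file name to file 3 here'
SIZE 500M REUSE AUTOEXTEND ON NEXT 10M MAXSIZE 32G;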
 
Karluk - have a star - that's great.

Now here's the next question.

I have experienced the problem of orphan dbf files. I've run this script, and sometimes the drop statement drops the tablespace but not the datafile, so I am left with a datafile that belongs to no tablespace. This has happened on a Windows server, and I cannot delete the dbf in DOS - the error is that another program is using the file. I just think Windows is confused.
DROP TABLESPACE SM_DIM_TS INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE SM_DIM_TS
DATAFILE 'E:\APP\METARDW\ORADATA\SM_DIM_TS.DBF'
SIZE 64M AUTOEXTEND ON next 16M MAXSIZE 32G
BLOCKSIZE 16K EXTENT MANAGEMENT LOCAL
autoallocate logging
DEFAULT COMPRESS FOR OLTP
online segment space management AUTO;
The only solution that seems to work is to shut down the db, delete the file with a DOS cmd, and restart the db. Fortunately this is not a production db.

So is there some other way to delete the orphan dbf without shutting down the db?
 
What you are doing is supposed to work, so the fact that the datafile is not always getting deleted means you are running into a bug. I see from various references that this is a known bug on Windows platforms. The most popular suggested work-arounds are to take the tablespace offline before dropping it, to first drop all of the objects that reside in the tablespace, or both. So I would try executing the following statement before your "drop tablespace" command and see if it makes the datafile cleanup more reliable.

Code:
ALTER TABLESPACE SM_DIM_TS OFFLINE;
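
In other words, the full sequence becomes

Code:
ALTER TABLESPACE SM_DIM_TS OFFLINE;
DROP TABLESPACE SM_DIM_TS INCLUDING CONTENTS AND DATAFILES;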
 
Thanks for the tip about taking it offline.

The bug you mentioned - is it localized to a version such as 11gR2 (which is what I am running), or is it present in multiple versions of the Oracle db?

Nevertheless - do you know how to get rid of the orphan dbf without shutting down the db, deleting the file with an OS cmd, and restarting?
 