
error transaction log full when using import command


ddiamond (Programmer), Apr 22, 2005
I'm executing the following command:

import from j:\gto\data\policyendorsement.del of del
messages c:\load.msg
insert_update into geniusdata.policy_endorsement;

This file contains about 1.3 million rows. I'm getting the error message "Transaction Log Full". I assume this is because I'm importing too many records without committing.

Is there any way to set a commit frequency? I don't want to turn on auto-commit, because committing after every insert/update would be far too slow. But it would be nice to commit after every 5,000 records or so.
 
For large amounts of data, LOAD is a better option than IMPORT. IMPORT performs SQL inserts, while LOAD writes the data directly into the database; no logging is done during a load, so LOAD is faster than IMPORT.
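
For example, a minimal LOAD of the same file might look like this (same path and table as your IMPORT above; note that LOAD only supports INSERT/REPLACE, not INSERT_UPDATE):

load from j:\gto\data\policyendorsement.del of del
messages c:\load.msg
insert into geniusdata.policy_endorsement;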
 
That appears to be true. I tried a load, and it took a minute and a half. I've restarted the import, it's already been running for an hour, and it still hasn't finished. But I need to update records that already exist and append records that don't. As far as I can tell, LOAD does not have this option, whereas IMPORT does. Any suggestions? Would it be any faster to load into a temporary table and then use the MERGE command?
 
Merge is one option.

Also look into "ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY" [of course, it comes with a price :) You cannot recover these tables.]
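
A minimal sketch of the MERGE route, after loading the file into a temp table; the key and data columns (policy_id, endorsement_text) are hypothetical placeholders for your table's actual columns:

merge into geniusdata.policy_endorsement t
using tmp_table s
on t.policy_id = s.policy_id                 -- hypothetical primary key
when matched then
    update set t.endorsement_text = s.endorsement_text
when not matched then
    insert (policy_id, endorsement_text)
    values (s.policy_id, s.endorsement_text);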
 
Thanks for the suggestions. I set the commitcount to 100000, and the import took 30 minutes. I then altered the table with NOT LOGGED INITIALLY and re-imported. It now takes 25 minutes, so NOT LOGGED INITIALLY did not give the import a significant performance boost. What exactly does NOT LOGGED INITIALLY do? If it completely turned off logging on that table, I would have expected the import to take about the same time as a load (a minute and a half), but the import is still an order of magnitude slower.

ALTER TABLE GENIUSDATA.POLICY_ENDORSEMENT ACTIVATE NOT LOGGED INITIALLY;

import from c:\PolicyEndorsement.del of del
commitcount 100000
messages c:\load.msg
insert_update into geniusdata.policy_endorsement;
 
Here is another option to try :) See if this is faster.
1) LOAD the data into a temp table.
2) Using db2move, "export" the data from this temp table.
3) Again using db2move, "import insert_update" the data into your table.

In case you are not familiar with db2move, here are the steps in a little more detail (a consolidated sketch follows them). You can look up the syntax if there are errors.
1)
a) Create the temp table:
create table geniusdata.tmp_table like geniusdata.policy_endorsement
Note: make sure this temp table has a PRIMARY key (CREATE TABLE ... LIKE does not copy constraints).
b) LOAD the data into tmp_table using the LOAD command.
2) Create a NEW folder (so the files are easy to locate) and execute all the commands from now on from that folder:
db2move db_name export -tc GENIUSDATA -tn TMP_TABLE
This will create 4 files:
db2move.lst, EXPORT.out, tab1.ixf, tab1.msg
Now EDIT db2move.lst: change the table name from "TMP_TABLE" to "POLICY_ENDORSEMENT".
3)
db2move db_name import -io insert_update
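
Putting those steps together, the sequence might look like this (db_name is a placeholder, and policy_id is a hypothetical key column; substitute your table's actual primary key):

db2 "create table geniusdata.tmp_table like geniusdata.policy_endorsement"
db2 "alter table geniusdata.tmp_table add primary key (policy_id)"
db2 "load from j:\gto\data\policyendorsement.del of del messages c:\tmpload.msg replace into geniusdata.tmp_table"
db2move db_name export -tc GENIUSDATA -tn TMP_TABLE
(now edit db2move.lst: TMP_TABLE -> POLICY_ENDORSEMENT)
db2move db_name import -io insert_update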

 
Thanks rbod. I'll take a look at db2move. I did find one technique that appears to be fairly fast:

Load the data (replace into) from the file into TMP_TABLE1 and TMP_TABLE2.

Load (insert into), from a cursor, into TMP_TABLE2 all records from POLICY_ENDORSEMENT that are not in TMP_TABLE1. At this point TMP_TABLE2 holds all the new records from the file plus the old POLICY_ENDORSEMENT records the file did not replace.

Finally, load (replace into) from a cursor over TMP_TABLE2 into POLICY_ENDORSEMENT.

The only problem with this approach is that every day I would be loading about 5 million records instead of importing 1 million, but loading 5 million records is still much faster than importing 1 million.
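
A sketch of those three loads using LOAD FROM CURSOR; the join column (policy_id) is a hypothetical placeholder for the table's actual key:

load from j:\gto\data\policyendorsement.del of del replace into tmp_table1;
load from j:\gto\data\policyendorsement.del of del replace into tmp_table2;

declare c1 cursor for
    select * from geniusdata.policy_endorsement p
    where not exists
        (select 1 from tmp_table1 t where t.policy_id = p.policy_id);
load from c1 of cursor insert into tmp_table2;

declare c2 cursor for select * from tmp_table2;
load from c2 of cursor replace into geniusdata.policy_endorsement;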



 
Generally: IMPORT goes through the bufferpool, while LOAD bypasses the bufferpool and writes straight to the tablespace files (that's why, up to V7, LOAD used to lock the whole tablespace); that makes the difference.

Import: NOT LOGGED INITIALLY means the changes really are not logged, but only for the unit of work in which it is activated; the first COMMIT switches logging back on. So when using it you should not set a commit count: with COMMITCOUNT 100000, only the rows up to the first commit went unlogged, which is why it gave you so little improvement.

When using LOAD, or IMPORT with NOT LOGGED INITIALLY, or db2move with the load option, the table is not recoverable, and you need to take a backup afterwards.
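
For example (the database name and backup path are placeholders):

db2 backup database db_name to d:\backups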


Juliane
 