
How to bulk load data into DB2 V7.2?


Guest_imported
I want to insert a large number of records into DB2 V7.2.
What is the fastest way to do it?
Are there any tools that can do this?

Thanks
 
Where does your data reside? In another table?

Salih Sipahi
Software Engineer.
City of Istanbul, Turkey
openyourmind77@yahoo.com
 
My data is in several files containing many rows of INSERT statements; the total size of the files is 200 MB. Executing these files is very time-consuming.

The files are generated by a data source, so if files of INSERT statements are not a suitable format for loading into the database, I can ask for the formatting to be changed.

Please suggest the fastest method!

Many thanks!
 
Try the insert SQL for many rows, i.e.

insert into [TABLE NAME] (field1, field2, ...) values
(value11, value12, ...),
(value21, value22, ...),
...
(valuen1, valuen2, ...)
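
A concrete minimal example (the table MY_TABLE and its columns are hypothetical):

insert into MY_TABLE (id, name) values
(1, 'first'),
(2, 'second'),
(3, 'third');

A single multi-row INSERT avoids the per-statement parse and execution overhead of running thousands of single-row INSERTs.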
 
For high-volume inputs, you could:

1 - Specify "commit;" every (say) 100 entries. This will take the pressure off temporary storage and cache resources (a minimal sketch of the pattern is shown below).
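
As an illustration, the generated script could interleave commits like this (MY_TABLE and the values are placeholders):

insert into MY_TABLE (id, name) values (1, 'a');
insert into MY_TABLE (id, name) values (2, 'b');
-- ... 98 more inserts ...
commit;
insert into MY_TABLE (id, name) values (101, 'c');
-- ... next 99 inserts ...
commit;

This keeps each unit of work small, so the log space held by uncommitted changes stays bounded.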

2 - Examine/modify your config settings. I would suggest these as the main ones influencing the type of high-volume dynamic SQL you describe:

update db cfg for dbxxxxx using app_ctl_heap_sz 128 ;
update db cfg for dbxxxxx using applheapsz 2048 ;
update db cfg for dbxxxxx using buffpage 10000 ;
update db cfg for dbxxxxx using dbheap 1500 ;
update db cfg for dbxxxxx using dft_prefetch_sz 32 ;
update db cfg for dbxxxxx using dft_queryopt 5 ;
update db cfg for dbxxxxx using locklist 500 ;
update db cfg for dbxxxxx using logbufsz 32 ;
update db cfg for dbxxxxx using logfilsiz 5000;
update db cfg for dbxxxxx using logprimary 20;
update db cfg for dbxxxxx using logsecond 4 ;
update db cfg for dbxxxxx using maxlocks 40 ;
update db cfg for dbxxxxx using num_iocleaners 3 ;
update db cfg for dbxxxxx using num_ioservers 9 ;
update db cfg for dbxxxxx using sortheap 2048 ;
update db cfg for dbxxxxx using stmtheap 4096 ;


Of course the above settings can be increased hugely if your system has the resources.

3 - If this task is a once-off, save the original cfg values and restore them once the SQL has been finalised (a sketch follows this item). Remember that logfilsiz could be critical if you are still using cyclic logging (because UDB could be waiting for log space).
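
A sketch of saving the current settings first, where dbxxxxx stands for your database name:

db2 get db cfg for dbxxxxx > dbxxxxx_cfg_original.txt

After the bulk load, reapply the saved values with the corresponding "update db cfg" commands.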

4 - Once your data is in place and a backup has been done, you can use the "Prune" command to remove old logs and thus save disk space.
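
For example, a sketch assuming the log file name S0000050.LOG (check your log directory for the real name of the oldest log you want to keep):

db2 connect to dbxxxxx
db2 prune logfile prior to S0000050.LOG

This removes the inactive log files older than the named one from the log path.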
 
I might also suggest that you set a table lock before the inserts. By doing this you will reduce the overhead of maintaining individual row locks. This would of course presume that you are the only one using the table at the time, and it would have to be done as part of a transaction (see the sketch below).

Paul
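
A minimal sketch of the pattern (MY_TABLE and dbxxxxx are placeholders):

connect to dbxxxxx;
lock table MY_TABLE in exclusive mode;
insert into MY_TABLE (id, name) values (1, 'a'), (2, 'b');
commit;

The table lock is released at commit, so the LOCK TABLE statement and the inserts must run inside the same unit of work.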
 