Transaction log file eats all drive space while importing data into db

SA23 (Programmer), Oct 2, 2001
Hi everyone,

When we are importing data into the db (SQL Server 2000), the transaction log file for this database grows and eats all available drive space (1 GB). This causes the import to be rolled back.

The thing is that this data must be imported in an atomic transaction (everything started under the transaction is either committed or rolled back). The data is imported from a text file: a Java program reads a line (record) from the file, converts it into insert statements, and inserts the record into the db (using the JDBC API). One line corresponds to three insert statements and a few update statements. Importing twenty or thirty lines is fine; however, the problem above arises when a file has several hundred lines.
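
In case it helps, the import loop looks roughly like this (a simplified sketch only; the connection details, table and column names below are made up, and only one of the three inserts is shown):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class LineImporter {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:odbc:mydb", "user", "password"); // placeholder connection details

            con.setAutoCommit(false); // the whole file must be one atomic transaction
            try {
                BufferedReader in = new BufferedReader(new FileReader("import.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\t");

                    // each line turns into three inserts plus a few updates;
                    // order_line and its columns are hypothetical names
                    PreparedStatement ins = con.prepareStatement(
                            "INSERT INTO order_line (order_id, item, qty) VALUES (?, ?, ?)");
                    ins.setString(1, fields[0]);
                    ins.setString(2, fields[1]);
                    ins.setInt(3, Integer.parseInt(fields[2]));
                    ins.executeUpdate();
                    ins.close();
                }
                in.close();
                con.commit();   // all lines committed together
            } catch (Exception e) {
                con.rollback(); // this is where we end up once the log fills the drive
                throw e;
            } finally {
                con.close();
            }
        }
    }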

We restore our database using a backup file (we are not using transaction log files to restore a database).

We have set up a maintenance plan that "Removes unused space from the database files" (if the database size is more than 50 MB). We have also turned the "Auto Shrink" option on. Obviously, neither can shrink the transaction log while the transaction is still in progress.

We would greatly appreciate any advice on how to overcome this problem.


Thanks a lot.
 
When you say a record line, is it some coded instruction set, or just a data record on which you base an insert/update? You can consider the following (preferably using a stored procedure):

1. If your inserts/updates have the same logic and only the data values differ, you can put all the line records in a temporary table and use this table to do the inserts/updates in a single batch. Start your transaction only after you have populated the temp table.
2. If the logic is more complicated and each line record may have different processing logic or different target tables, you can work along the same lines (assuming these operations run in single-user mode). Keep populating temp tables with the final values to be inserted/updated in autocommit mode, and start the atomic transaction only when you begin inserting/updating from these temp tables.

The idea is to keep your transactions as small as possible without losing out on your functional requirements.
I would also suggest that you use the Java API only to populate a temp table with the line records, and write the remaining functionality in a stored procedure which you invoke from your Java code; a rough sketch of this follows below.
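
Something along these lines (sketch only; line_stage, usp_process_lines and the connection details are made-up names you would replace with your own):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class StagedImporter {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:odbc:mydb", "user", "password"); // placeholder connection details

            // Step 1: load the raw lines into a staging table in autocommit mode,
            // so this part does not hold one huge open transaction.
            PreparedStatement stage = con.prepareStatement(
                    "INSERT INTO line_stage (raw_line) VALUES (?)");
            BufferedReader in = new BufferedReader(new FileReader("import.txt"));
            String line;
            while ((line = in.readLine()) != null) {
                stage.setString(1, line);
                stage.addBatch();
            }
            stage.executeBatch();
            stage.close();
            in.close();

            // Step 2: the real insert/update logic lives in a stored procedure,
            // which runs as one (much shorter) atomic transaction.
            con.setAutoCommit(false);
            try {
                CallableStatement proc = con.prepareCall("{call usp_process_lines}");
                proc.execute();
                proc.close();
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            } finally {
                con.close();
            }
        }
    }

The staging inserts commit as they go, so they do not hold log space for the whole file; only the stored procedure's work has to fit inside the atomic transaction.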

RT
 
From Books Online:
Before doing bulk copy operations, it is recommended that you set the recovery model to bulk-logged if you usually use full recovery. This will prevent the bulk copy operations from using excessive log space and possibly filling the log. However, even with bulk-logged recovery, some transaction log space will be used. You may want to create transaction log backups during the bulk copy operation to free up transaction log space.
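
Note that this applies to bulk copy operations (bcp, BULK INSERT, SELECT INTO and the like), not to ordinary row-by-row inserts, which are fully logged under every recovery model. If you do load a staging table that way, switching the recovery model and backing up the log from your Java code might look something like this (the database name mydb and the backup path are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RecoveryModelSketch {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:odbc:mydb", "user", "password"); // placeholder connection details
            Statement stmt = con.createStatement();

            // Switch to bulk-logged recovery before the bulk load
            stmt.execute("ALTER DATABASE mydb SET RECOVERY BULK_LOGGED");

            // ... run the bulk copy / import here ...

            // Back up the log to free log space, then return to full recovery
            stmt.execute("BACKUP LOG mydb TO DISK = 'D:\\Backup\\mydb_log.bak'");
            stmt.execute("ALTER DATABASE mydb SET RECOVERY FULL");

            stmt.close();
            con.close();
        }
    }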

 