
The Transaction Log for the database is full


speedks1 (IS-IT--Management)
Oct 22, 2005
We have a table with 3 million rows, and we need to perform an update based on selection criteria; the selection would update 450K of those 3 million rows. I get a "The transaction log for the database is full" error every time I try to update this many rows at once. My update command looks like this:
db2 update tablename set columnname1 = 'value1' where columnname2 = 'value2';

Is there a problem with the query? It works fine for small updates. I can't narrow the 'where' clause by adding 'and' conditions on other columns, because the number of separate update statements would run into the hundreds. Any help is highly appreciated.
 
speedks1,

It sounds as if you could turn off the logging. See this thread to see if it helps: thread178-641780
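In case that thread doesn't cover it: the usual way to suspend logging for a single unit of work in DB2 is NOT LOGGED INITIALLY. A rough sketch using the table name from your post (big caveat: if the unit of work fails, the table is marked inaccessible and has to be restored or recreated, and depending on your version the table may need to have been created with the NOT LOGGED INITIALLY attribute):

db2 +c "alter table tablename activate not logged initially"
db2 +c "update tablename set columnname1 = 'value1' where columnname2 = 'value2'"
db2 commit

The +c option turns autocommit off so that the alter and the update run in the same unit of work; NOT LOGGED INITIALLY only stays in effect until the next commit.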

Marc
 
I think the default size and number of primary logfiles when a database is created is a bit small (3 x 250 x 4 kB).
I got used to creating new databases with 10 x 2500 x 4 kB, and that seemed to clear up the 'the transaction log for the database is full' type of errors.
In my case this is with circular logging on all databases.
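You can check what a database currently uses with something like this (SAMPLEDB is a placeholder for your database name; the grep just trims the output on UNIX):

db2 get db cfg for SAMPLEDB | grep -i log

Look for logfilsiz (the log file size in 4 kB pages), logprimary and logsecond.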

Ties Blom
Information analyst
 
No need to turn off logging in V8 if you don't want to.
Ties is right: increase the space.
In Version 8 you can set the number of secondary logfiles to unlimited if you don't want to worry and have no problem with filesystem space.
Otherwise, increasing the logfilsiz parameter to something decent is one thing; I usually also use a logsecond of 100 or so to avoid always having large logfiles. logsecond logfiles are only allocated as needed (e.g. for large updates), while logprimary files are preallocated and fixed.
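For example (SAMPLEDB and the values are placeholders; log configuration changes only take effect once all connections are gone and the database is reactivated):

db2 update db cfg for SAMPLEDB using LOGFILSIZ 10000
db2 update db cfg for SAMPLEDB using LOGSECOND 100

or, for unlimited secondary logfiles (this needs archival rather than circular logging):

db2 update db cfg for SAMPLEDB using LOGSECOND -1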
Juliane
 
speedks1,

What size are your log files, and do you have circular or archival logging? Essentially, if you're updating 450,000 rows of data in one go, without committing in between, all of that information is written to the log files in a single unit of work, and at the moment that's more data than your log files can hold. The log files are there in case you need to recover the database, or, in this case, if your transaction rolls back: DB2 has to run back through the log files to put the database back how it was before you issued your update.

One option off the top of my head: you could put a loop in a program, declare a cursor WITH HOLD using the same where-clause criteria, update each row inside the loop, and do regular commits. There is a sketch of this below.
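A minimal SQL PL sketch of that idea, reusing the table and column names from the original query (the procedure name and the 10,000-row commit interval are just illustrative):

CREATE PROCEDURE batch_update ()
LANGUAGE SQL
BEGIN
  DECLARE v_dummy INTEGER;
  DECLARE v_count INTEGER DEFAULT 0;
  DECLARE v_done  INTEGER DEFAULT 0;

  -- WITH HOLD keeps the cursor open across the intermediate commits
  DECLARE cur CURSOR WITH HOLD FOR
    SELECT 1 FROM tablename
    WHERE columnname2 = 'value2'
    FOR UPDATE OF columnname1;

  DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

  OPEN cur;
  fetch_loop: LOOP
    FETCH cur INTO v_dummy;
    IF v_done = 1 THEN LEAVE fetch_loop; END IF;

    UPDATE tablename SET columnname1 = 'value1'
      WHERE CURRENT OF cur;

    SET v_count = v_count + 1;
    -- commit every 10000 rows so the active log never has to hold
    -- all 450K row changes at once
    IF MOD(v_count, 10000) = 0 THEN COMMIT; END IF;
  END LOOP fetch_loop;
  CLOSE cur;
  COMMIT;
END

When creating the procedure through the CLP you need an alternate statement terminator, e.g. db2 -td@ -vf script.sql with an @ after the final END.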

Cheers
Greg
 