
"Transaction log almost (always) full"


GaryAshford

Programmer
Jan 2, 2002
28
GB
Hi.

I have just done a default install of Sybase ASE 12.5 on Windows 2000. I have got past the SYCONFIG problem, side-stepped the LC_ALL pitfall, and managed to set up a default database to do some testing!
But as soon as I have run about 100 or so SQL statements from my client application, the application locks up and I get an error message telling me that the transaction log in the master database is almost full.
I can dump the transaction log OK and continue, but I would rather not have to do this every two minutes.
Does anyone know how to automatically throw away the transaction log when it fills up? Or, better still, stop logging transactions altogether?
(I gather that with other, non-master databases, one can set the transaction log to truncate automatically.)
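The dump I keep having to run is just the standard truncate-only form, e.g.

use master
go
dump transaction master with truncate_only
go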

Yours in frustration
Gary
 
Make sure you're not doing your testing in master. If you're really testing in your own database, master should not be affected like this. Otherwise you should increase the size of master to a reasonable amount. You can also add a simple threshold procedure to automatically truncate the log:
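For example (the 50 MB figure is only illustrative; size it to suit your setup):

use master
go
sp_helpdb master
go
/* extend the master database by 50 MB on the master device */
alter database master on master = 50
go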


use sybsystemprocs
go

if exists (select * from sysobjects where name = "sp_thresholdaction")
    drop procedure sp_thresholdaction
go

create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
declare @error int

/*
** If the segment is the log segment and this is the last-chance
** threshold, truncate the log.
** (The log segment is always segment 2 in syssegments.)
*/
if @segmentname = (select name from syssegments where segment = 2)
   and (@status & 1) = 1
begin
    /* print a warning message to the error log */
    print "THRESHOLD WARNING: truncating log in '%1!'", @dbname

    /* truncate the transaction log */
    dump transaction @dbname with truncate_only
    select @error = @@error
    if @error != 0
    begin
        print "THRESHOLD ERROR %1!: %2! TRUNCATE_ONLY TRAN DUMP UNSUCCESSFUL", @error, @dbname
    end
end
go
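The last-chance threshold calls sp_thresholdaction automatically, so nothing else needs wiring up. You can check the thresholds on a database with sp_helpthreshold (testdb here is just a placeholder name):

use testdb
go
sp_helpthreshold "logsegment"
go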
 
If it's a development environment, you should be able to turn on the "trunc log on chkpt" (truncate log on checkpoint) database option for the user database. Then, as long as no single large transaction fills the log on its own, the server will just keep purging the tran log at each checkpoint. Of course, you are limiting recoverability, etc., but for development environments it's not out of the question. Production environments are a whole other story...
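A quick sketch, using testdb as a placeholder database name:

use master
go
sp_dboption testdb, "trunc log on chkpt", true
go
use testdb
go
checkpoint
go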
 
Yeah, adding a column and then populating the default is a pain; I had to do this for a 30 million row table....

Several options -

- BCP the data out, add the column with its default set, then BCP the data back in, using a format file to skip the new column. Drawbacks are having to reload the data and possibly rebuild the indexes. You could load the data with the indexes already in place and then let your update statistics run.

- When we did the 30 million row table, we were able to drop the secondary indexes and then use the unique index to work through the table. Part of our key was area code, so we wrote scripts that looped based on area code, and we parallelized by having one set of jobs working up from 200 and another working down from 999. Of course, we had deadlock logic in the scripts. But as long as the new column is not part of an index, it's not too bad (a sketch of this kind of batched loop follows below).
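Something along these lines, assuming a hypothetical customers table with area_code in the unique key and a new new_flag column to backfill (deadlock-retry logic left out for brevity):

declare @ac int
declare @err int

select @ac = 200        /* this job works up from 200; a mirror job works down from 999 */

while @ac <= 599        /* split the range between the parallel jobs */
begin
    /* backfill the new column for one area code at a time */
    update customers
       set new_flag = 0
     where area_code = @ac
       and new_flag is null

    select @err = @@error
    if @err != 0
    begin
        /* real scripts would retry here on deadlock (error 1205) */
        print "update failed; stopping"
        break
    end

    select @ac = @ac + 1
end

Keeping each area code as its own implicit transaction keeps individual transactions small, so a trunc-log-on-chkpt setting or the threshold procedure above can stop the log from filling.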

I wouldn't think an 8 million row table would take that long since it's not very large, unless the row sizes are pretty large, forcing a lot of page splitting....

-mjm
 