
SQL Log File (LDF)


nvhusker

IS-IT--Management
Feb 7, 2006
Hey all, I am not a programmer but I thought I might get some advice in here about our SQL 2000 database and the log file. First off, let me apologize if I am in the wrong forum.

Okay, we currently have a SQL database that is about 66 gigs in size. This is our MDF file. If we run a shrink on our database log file, we can get it down to about 760 mb. Here is the problem/question that I have. We run the shrink on, let's say, day 1; when we come back on day 2, the LDF file can be as large as 13 gigs. 760 mb to 13 gigs in one day! What is going on? Where can I find out what is happening in our database to make the log file grow that large, that fast?
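For reference, the kind of shrink we run boils down to something like this (just a rough sketch; OurDB and OurDB_Log are placeholder names for our database and the log's logical file name):

-- See how big each database's log is and how much of it is actually in use
DBCC SQLPERF(LOGSPACE);

-- Shrink the physical LDF file back down (target size is in MB)
USE OurDB;
DBCC SHRINKFILE (OurDB_Log, 760);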

The problem is that we are trying to use software called Legato Replistor to replicate our DB and numerous other servers over to our disaster recovery site. When our log file changes this much in such a short time, our pipe (a T-1) can't keep up with it, and it never gets back to a replicated status. There is no doubt we need a larger pipe, like a T3.

But how can I find out why one particular entry into the database can cause multiple entries in the log file? For example, if someone changes a person's SSN from our software application, it will update the database, but how many entries could that one change add to the log file? Thanks in advance for any response.
 
How many entries it adds would depend on the database structure and on whether there are any triggers on the table being updated.

More than likely, though, what you have is one or more tables that are being recreated constantly from imported data rather than from user entries.

Questions about posting. See faq183-874
 
You can use SQL Profiler to see what commands are being executed on the SQL Server. You'll need to have sysadmin rights to run the profiler.
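If you save the trace out to a file, you can also query it from T-SQL, something along these lines (just a sketch; the path is an example, and it assumes the trace captured those columns):

-- Read a trace file written by Profiler (or a server-side trace) and sort by
-- write activity to find the statements hitting the log the hardest
SELECT TextData, Duration, Reads, Writes, StartTime
FROM ::fn_trace_gettable('C:\Traces\LogActivity.trc', DEFAULT)
WHERE TextData IS NOT NULL
ORDER BY Writes DESC;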

For every insert, update, and delete that is done, data is written to the transaction log. If you update a record in a table, and that fires off a trigger which inserts a record into another table and then updates the first table, that's three statements' worth of records written to the transaction log.
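Here's a contrived example of that pattern (all the table, column, and trigger names are made up just to illustrate it):

-- Hypothetical tables, only to show one user change turning into three statements
CREATE TABLE Person (PersonID int PRIMARY KEY, SSN char(11), LastChanged datetime);
CREATE TABLE PersonAudit (AuditID int IDENTITY(1,1) PRIMARY KEY,
    PersonID int, OldSSN char(11), ChangedAt datetime);
GO

CREATE TRIGGER trg_Person_Update ON Person
AFTER UPDATE
AS
BEGIN
    -- Statement 2: the trigger inserts an audit row into another table
    INSERT INTO PersonAudit (PersonID, OldSSN, ChangedAt)
    SELECT d.PersonID, d.SSN, GETDATE()
    FROM deleted d;

    -- Statement 3: the trigger updates the first table again
    -- (recursive triggers are assumed off, which is the default)
    UPDATE p
    SET LastChanged = GETDATE()
    FROM Person p
    JOIN inserted i ON p.PersonID = i.PersonID;
END
GO

-- Statement 1: the single change the user actually made from the application
UPDATE Person SET SSN = '999-99-9999' WHERE PersonID = 1;

All three statements get logged, so one SSN change from the front end ends up as several entries in the transaction log.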

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
And BTW, if your log is growing back to 13 gigs every day, I would stop shrinking it. All you are doing is putting additional load on the disks for no reason.

Not to mention that every time the log grows, that's more data written to the log.

Denny
MCSA (2003) / MCDBA (SQL 2000) / MCTS (SQL 2005)

--Anything is possible. All it takes is a little research. (Me)
[noevil]
 
Thanks for all the replies, everyone. I have a question about setting the transaction log to not autogrow. Is this dangerous to do? Does it have any impact on being able to restore the database if we need to in the future? I guess what I am looking for is a way to keep the log from growing so fast without affecting the integrity of the database. It is really hurting our replication process: we have so many changes to the database in a single day that it is impossible for the replication software to keep up with them and transfer them over the T1.
 
SQL Server won't ask you to increase the log file. It will give you an error saying it ran out of room, and the job/transaction will fail.

So, if you don't allow the log file to autogrow, you have to do one of two things:

1. Make the log file large enough for any transaction you might have.
2. Increase the size of the log file yourself before you do large transactions (see the sketch below).
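Either way, the sizing itself is just ALTER DATABASE, roughly like this (OurDB, OurDB_Log, and the sizes are placeholders):

-- Pre-size the log so it is big enough for your largest job
ALTER DATABASE OurDB MODIFY FILE (NAME = OurDB_Log, SIZE = 15000MB);

-- Then, if you really want it to never grow on its own, turn autogrow off
ALTER DATABASE OurDB MODIFY FILE (NAME = OurDB_Log, FILEGROWTH = 0);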


-SQLBill

Posting advice: FAQ481-4875
 
Your first step is to find out what changes are happening. If, for instance, you load a large import once a day, then your transaction log will always need to be able to grow to accommodate it. Setting growth to off will only create more problems, not fix them.

Now, if the growth is spread through the day, you might get by with a smaller transaction log by backing it up more frequently. But this won't work if the majority of the growth comes from once-a-day large imports. Your system will always work most efficiently if it does not have to grow the log, so the log size should be set to accommodate the largest amount of data you generate between backups.
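For example, a scheduled job along these lines, run every 15-30 minutes (the database name, path, and interval are placeholders), lets SQL Server reuse the log space instead of growing the file:

-- Frequent log backups keep the active log from piling up between backups
-- (schedule this from a SQL Server Agent job)
BACKUP LOG OurDB
TO DISK = 'D:\Backups\OurDB_log.bak'
WITH NOINIT;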

If your replication software cannot keep up with the number of changes, you either need better software or a larger line for the data transfer. Nothing you are suggesting would reduce the number of changes that are happening, except by breaking the ability to make the changes at all, and more than likely your users will not tolerate that.

Consider this: if a large import is what is hurting your ability to replicate, is that data really what needs replication? If not, then put it into a separate database and set that database's recovery model to simple. You can then copy it to the other site as a separate database right after the import, and replication should work fine on the tables in your original database, which change through the day but not in huge job lots.
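Switching that import-only database to simple recovery is a one-liner (ImportStaging is a placeholder name):

-- Simple recovery keeps the staging database's log small because it is
-- truncated at each checkpoint (ImportStaging is a placeholder database name)
ALTER DATABASE ImportStaging SET RECOVERY SIMPLE;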

Questions about posting. See faq183-874
 