Tek-Tips is the largest IT community on the Internet today!


LDF has filled 40 GB hard drive!! What is this file?

Status
Not open for further replies.

ShawnF (IS-IT--Management) · Oct 1, 2001 · US
We recently had a few new servers installed at my workplace, and as the (would-be) systems administrator I am trying to work through this problem. The vendor that installed our servers built a custom MS Access front end to work with a SQL Server they set up and configured for us.

Well, after a couple of months of running, the hard drive on the SQL server is suddenly full because of a specific LDF file that has grown to about 40 GB. The programmer who wrote our software is not available right now and I need to do something about this ASAP; our customer database is down until I can free up space on the drive. I've run a complete virus scan on this server and nothing was found. The problem is coming from this one LDF file. I ran a search for all files modified since the day before the problem appeared, and only a handful showed up, none of which are a concern.

What is this file, why has it grown so large, and what can I do to fix it (delete it?)? I'm not familiar with SQL and would really appreciate some help.

Thanks,

Shawn F.
 
Oops, I forgot to mention the file name and location!

ourbusinessnameSQL.ldf

It is located in X:\SQLData\MSSQL\Data.
 
What operating system is it running on? And if not Windows 2000, which version of DOS?

Joseph Logan
jlogan@softsource.net
 
SQL Server is installed on a Windows 2000 Server.
 
The LDF file is the SQL Server transaction log for one of your databases. Transaction logs are a critical, essential component of a SQL Server database. SQL Books Online (BOL) explains the purpose of transaction logs; you can also read about logs on the MSDN website.


Transaction logs continue to grow until you back up the database or truncate the log. If you care about data recovery, back up the database rather than truncating the log.

You'll want to read about shrinking the log after you determine how to stop the growth. HINT: back up the database regularly and frequently. Read about shrinking logs in SQL BOL or on MSDN.
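As a rough sketch of that maintenance pattern on SQL Server 2000 (the database name, logical log file name, and backup path below are placeholders, not Shawn's actual names):

```sql
-- Back up the transaction log to disk; this truncates the inactive
-- portion of the log so the space inside the .ldf file can be reused.
BACKUP LOG YourDatabase TO DISK = 'X:\SQLData\Backups\YourDatabase_log.bak'

-- Then, if the physical .ldf file itself must be made smaller, shrink it.
-- 'YourDatabase_Log' is the LOGICAL file name (run sp_helpfile to find it).
DBCC SHRINKFILE (YourDatabase_Log, 100)  -- target size in MB
```

Scheduling the log backup regularly (e.g. as a SQL Server Agent job) is what keeps the file from growing unbounded in the first place.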


You may also want to read the article at
Terry L. Broadbent
FAQ183-874 contains tips for posting questions in these forums.
NOTE: Reference to the FAQ is not directed at any individual.
 
Afternoon,

Create this procedure in the master DB; that way it is accessible to all other DBs that may need their TLog file reduced. Once the procedure is created, run it from Query Analyzer against the DB that has the large TLog. It should reduce the size quite substantially.

======================== script ========================
CREATE proc sp_force_shrink_log
    @target_percent tinyint = 0,
    @target_size_MB int = 10,
    @max_iterations int = 1000,
    @backup_log_opt nvarchar(1000) = 'with truncate_only'
as
set nocount on

declare @db sysname,
    @last_row int,
    @log_size decimal(15,2),
    @unused1 decimal(15,2),
    @unused decimal(15,2),
    @shrinkable decimal(15,2),
    @iteration int,
    @file_max int,
    @file int,
    @fileid varchar(5)

select @db = db_name(),
    @iteration = 0

-- holds the output of DBCC LOGINFO for the current database
create table #loginfo (
    id int identity,
    FileId int,
    FileSize numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo int,
    Status int,
    Parity smallint,
    CreateTime datetime
)

create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

-- collect the file ids of the log files (status bit 0x40 marks a log file)
create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
select @file_max = @@rowcount

-- dummy table used to generate log activity so the active log portion wraps
if object_id( 'table_to_force_shrink_log' ) is null
    exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
    @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
    @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo

select @unused1 = @unused -- save for later

select 'iteration' = @iteration,
    'log size, MB' = @log_size,
    'unused log, MB' = @unused,
    'shrinkable log, MB' = @shrinkable,
    'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

while @shrinkable * 100 / @log_size > @target_percent
    and @shrinkable > @target_size_MB
    and @iteration < @max_iterations
begin
    select @iteration = @iteration + 1 -- this is just a precaution

    -- generate log activity so the active portion cycles forward in the file
    exec( 'insert table_to_force_shrink_log select name from sysobjects
           delete table_to_force_shrink_log' )

    select @file = 0
    while @file < @file_max
    begin
        select @file = @file + 1
        select @fileid = fileid from #logfiles where id = @file
        exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
    end

    exec( 'backup log [' + @db + '] ' + @backup_log_opt )

    truncate table #loginfo
    insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount

    select @log_size = sum( FileSize ) / 1048576.00,
        @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
        @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from #loginfo

    select 'iteration' = @iteration,
        'log size, MB' = @log_size,
        'unused log, MB' = @unused,
        'shrinkable log, MB' = @shrinkable,
        'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end

if @unused1 < @unused
    select 'After ' + convert( varchar, @iteration ) +
        ' iterations the unused portion of the log has grown from ' +
        convert( varchar, @unused1 ) + ' MB to ' +
        convert( varchar, @unused ) + ' MB.'
    union all
    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
    union all
    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
    union all
    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10
else
    select 'It took ' + convert( varchar, @iteration ) +
        ' iterations to shrink the unused portion of the log from ' +
        convert( varchar, @unused1 ) + ' MB to ' +
        convert( varchar, @unused ) + ' MB'

exec( 'drop table table_to_force_shrink_log' )



J. Kusch
 
Cool, thanks for all the wonderful responses! Basically, the problem appears to be the way our servers are backing up the database. Our backup software, Veritas, is backing up everything on the SQL server, and it seems that every time a backup runs this log file grows. The log file has been shrunk back down for now, but I am having a tech come tomorrow to change SQL to use its own backup agent for backing up the database. No one was available from the computer vendor we purchased this from to help us out this morning, so I was a bit panicky since no one could use the database. I just don't know enough about SQL at this point, so the info everyone gave me was very helpful and much appreciated!

J. Kusch, I just don't know enough about SQL to even know where to begin with the script you provided. Plus, this database really "belongs" to the computer vendor that made it, in that they designed, built, and installed it for us and are really the ones who know what is going on. But I do really appreciate your help in the matter. Just knowing that this file is an important transaction log and should not be deleted was helpful in and of itself (because I was tempted to delete it).


Thanks!

Shawn F.
 
Morning Shawn,

Thanks for the post. Have your offsite tech add this to your server, or if you're feeling a bit brave it is rather easy. Just copy the code above, open Enterprise Manager, choose Tools from the menu, then choose Query Analyzer (QA). By default, when QA starts up its default database is master, which is right where you need to be. Paste the code and execute it by hitting the green arrow or F5. That step creates the stored procedure in the Stored Procedures folder of the master DB.

Now you are ready to change the default database in QA from master to the database that has the large transaction log (known as the TLog or LDF). On the QA toolbar to the right you will see a drop-down box labeled "DB:", which should show master right now. Drop it down and choose the "ourbusinessnameSQL" DB whose TLog needs to be reduced.

The procedure you created in the prior step is called "sp_force_shrink_log", so clear all the code out of QA by hitting the red "X" and type in "sp_force_shrink_log". Hit the green arrow or F5 and bingo ... it will start processing. It shows iterations of how large the TLog is and then its size after each pass knocks it down a bit. I have used this procedure on a 96 GB TLog and it went through 423 iterations before reducing it to 2 MB; it took about 4 minutes. I know this may be out of your realm a bit, but it is a great tool for your vendor to have and use.
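For the vendor's benefit, the whole run can also be done as a script in QA rather than via the DB: drop-down. Something along these lines (the parameter values are just illustrative; the defaults also work):

```sql
USE ourbusinessnameSQL
GO
-- Run with the defaults, or override them, e.g. stop once the
-- shrinkable portion of the log is under 10 MB:
EXEC sp_force_shrink_log @target_size_MB = 10, @max_iterations = 1000
```

Each iteration prints the log size, unused space, and shrinkable percentage, so you can watch it work its way down.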

Another FYI: the LDF is the TLog, as we now know. The MDF is the database file itself, and in some cases you will see NDFs, which are secondary database files sometimes used to hold file groups for better performance in larger DB shops. You have 1 MDF and 1 LDF by default, and you can possibly have 1 or more NDFs.
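If you ever want to see exactly which of these files a database is using, sp_helpfile (run while connected to that database) lists them with their logical names, physical paths, and sizes:

```sql
USE ourbusinessnameSQL
GO
EXEC sp_helpfile
-- The row whose usage column reads 'log only' is the LDF;
-- 'data only' rows are the MDF and any NDFs.
```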

Have a great week!

J. Kusch
 
James,

I will pass that code on to our vendor for sure.

Thanks,

Shawn F.
 