
Filegroup is full....


gbaughma

IS-IT--Management
I'm trying to do a pretty large (1.2M Records) create... and I keep getting:

Code:
Could not allocate space for object 'dbo.WebSample'.'IX_ProjectID' in database 'NtsWeb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

I have autogrowth turned on (I think... it says it is... starts at 500MB with an unlimited growth of 10%)

... in retrospect, Autogrow is turned on for the WebSample table... but I'm not sure where to turn it on for the filegroup (or if that's even the thing to do).

SQL Server Express 2008 using SQL Server Management Studio.
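(For reference: autogrow is a per-file setting rather than a filegroup or table setting. A quick way to see what each file in the database is currently set to, using the SQL Server 2008 catalog view:)

Code:
USE NtsWeb;
GO
-- size and growth are in 8 KB pages; is_percent_growth = 1 means growth is a percentage
SELECT name, physical_name, size, max_size, growth, is_percent_growth
FROM sys.database_files;
GO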

TIA!


Can someone tell me where I'm going wrong?


Just my 2¢

"What the captain doesn't realize is that we've secretly replaced his Dilithium Crystals with new Folger's Crystals."

--Greg
 
OK, first of all, you might have something wrong with your computer, so the first thing to do is turn it off and back on; you might need to do it two or three times. If you still have a problem with memory allocation, then check how much memory and disk space you've got: open My Computer from the Start menu, right-click in it (or on the drive), and choose Properties. That will tell you whether you have enough; if you don't, clean up some space.
Feel free to ask more questions.
 
Thread starter • Moderator • #3
Windows Server 2008
6GB Ram
337 GB Free on Database drive.

I really don't think that's the issue.

It's an issue with the PRIMARY filegroup in SQL filling up.


 
Maybe your files are overloaded in the temporary folder; check that. If that's OK, then try modifying the files to have greater capacity. Here is the script:

USE master;
GO
ALTER DATABASE NtsWeb
MODIFY FILE
(NAME = filename1,   -- logical name of the file to enlarge (from sys.database_files)
 SIZE = 20MB);       -- new total size; must be larger than the file's current size
GO
 
Thread starter • Moderator • #5
I think I may have found it (trying the query again now).

My log file was HUGE (4GB). So I took the database offline, renamed the log file, brought the database back online (which re-created a new log file... 1mb) and I'm re-running the query.

So, the question becomes, how do I keep this from happening again? Is there a way to truncate the logfile, or tell it to do a FIFO for old entries or something?



 
If you create the file with a specific size limit, then it won't grow past that limit. You will know, then, when it's time to recreate the log file. I prefer to do it manually.
Here is the script for creating a log file with a specific size limit:

USE master;
GO
ALTER DATABASE NtsWeb
ADD LOG FILE
(
    NAME = filename1,                        -- logical name for the new log file
    FILENAME = 'c:\path\logfilename1.ldf',
    SIZE = 5MB,                              -- pick an initial size from 1 to 5 MB
    MAXSIZE = 300GB,                         -- any cap below the 337 GB of free drive space
    FILEGROWTH = 5MB
);
GO
 
Thread starter • Moderator • #7
I think that I may have found a hint to the FIFO bit....

If I set the logfile to an arbitrary size, such as 100MB, or 500MB or whatever, and turn OFF the autogrow, will it drop old entries on a FIFO basis?



 
Thread starter • Moderator • #8
It didn't like that either.

So, on this huge INSERT query that I'm running, the log file is filling up (4+ GB).

Is there a way to do this insert in chunks (batches)? I'm running it as a stored procedure, and something that I hope to schedule and automate.

You know what? I'll start a new thread instead....


 
For the file size, there are a few things that could be occurring.

1. The drive has run out of space - run master..xp_fixeddrives. That will show how much free space is on each drive.

2. The autogrow is not set properly. I prefer to set it in MB, not %. Let's say you have it grow by 10%: if the current size is 1000 MB, then it needs 100 MB of room to grow. If there is only 99 MB or less free, you will get that error even though there is still some empty space - if SQL Server can't expand by the required amount, it raises that error.

3. Lastly, you may be doing inserts faster than the file can grow. Use Nastia's suggestion: set the file to a large size and autogrow by MB, not %. Having a large initial size will allow your inserts to happen without constantly autogrowing. (The other solution is the one you are asking about in the other thread.) A quick sketch of the space check and the MB-based growth setting is below.
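(A sketch of both, for illustration only; 'NtsWeb_data' is a placeholder for the real logical file name, which sys.database_files will show:)

Code:
-- free space per drive, in MB
EXEC master..xp_fixeddrives;
GO
-- pre-size the data file so the big insert doesn't have to autogrow constantly
ALTER DATABASE NtsWeb
MODIFY FILE (NAME = NtsWeb_data, SIZE = 2000MB);
GO
-- grow by a fixed 100 MB instead of 10%
ALTER DATABASE NtsWeb
MODIFY FILE (NAME = NtsWeb_data, FILEGROWTH = 100MB);
GO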

-SQLBill

The following is part of my signature block and is only intended to be informational.
Posting advice: FAQ481-4875
 
Thread starter • Moderator • #10
New thread started... it's too many records to handle at once....



 
If you refresh the log file regularly, then it will drop off the old records, I think. You just have to recreate the log file again at your own convenience. But a large log file, like 500 MB, is a good idea.
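(A sketch of one common way to do that refresh without taking the database offline; it assumes the database uses, or can be switched to, the SIMPLE recovery model, and 'NtsWeb_log' is a placeholder for the actual logical log file name:)

Code:
USE master;
GO
-- under SIMPLE recovery, the log truncates itself at checkpoints
ALTER DATABASE NtsWeb SET RECOVERY SIMPLE;
GO
USE NtsWeb;
GO
CHECKPOINT;
GO
-- shrink the log file back down to about 500 MB
DBCC SHRINKFILE (NtsWeb_log, 500);
GO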
 
Greg,

Do you ever back up your transaction log?

SQL Server Express has a 4 GB limit on the size of a database (the limit applies to the data file; the log file doesn't count toward it, but it can still grow out of control). It's best to set the log file to a "reasonable" size and then back it up regularly. Log files can (and do) fill up; by backing up the log, you keep its size down to something reasonable. A quick search on "SQL Server backup transaction log" ought to get you the syntax you need. Depending on the size of your db and how quickly the log file fills up, you may want to schedule log backups more frequently than your database backups. If your db is mostly reads, then you may want to schedule them to occur at the same time as your database backup.
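(A minimal example of that syntax; the backup path is a placeholder, and a log backup like this is only possible when the database uses the FULL or BULK_LOGGED recovery model:)

Code:
-- back up the transaction log so the inactive portion can be reused
BACKUP LOG NtsWeb
TO DISK = 'D:\Backups\NtsWeb_log.trn';   -- placeholder path and file name
GO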

-George

"The great things about standards is that there are so many to choose from." - Fortune Cookie Wisdom
 
Thread starter • Moderator • #13
gmmastros: I cleared the transaction log, and it filled up to 4gb again and overflowed when I tried to do a massive insert....

Now I'm just having problems with the massive insert... I realized that it was too many records at once, and if I chopped it up into smaller chunks to commit, it doesn't fill up the log.

However, the problem I'm having now is that no matter what I try, it keeps inserting the same records over and over again. I have another thread where I'm pulling my hair out over it.
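(For what it's worth, a rough sketch of one way to batch the insert so each chunk commits on its own and rows that were already copied get skipped; the source table and the extra column names are made up for illustration, and only dbo.WebSample and ProjectID come from the thread:)

Code:
-- insert in chunks of 50,000 rows; each batch commits on its own
DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.WebSample (ProjectID, SampleDate, SampleValue)   -- hypothetical columns
    SELECT TOP (@BatchSize) s.ProjectID, s.SampleDate, s.SampleValue
    FROM dbo.SourceTable AS s                                        -- hypothetical source table
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.WebSample AS w
                      WHERE w.ProjectID = s.ProjectID
                        AND w.SampleDate = s.SampleDate);            -- skip rows already copied

    IF @@ROWCOUNT = 0 BREAK;   -- stop when a batch inserts nothing
END;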

Thanks though.... I'll keep a better eye on my transaction log.



 