
'Primary' filegroup is full


sujosh

Programmer
Nov 29, 2001
93
US
I am importing data from a 15 GB Oracle database into SQL Server using DTS. It bombs while doing the last 4 tables, saying "Could not allocate space for object <table_name> in database <database_name> because the 'Primary' filegroup is full."

I have everything set up for autogrow of the data and log files in the database setup. I am doing the DTS import after executing this SP to truncate the log, so I don't have the problem of the log getting full.

exec sp_dboption @dbname = '<db_name>',
@optname = 'trunc. log on chkpt.',
@optvalue = 'true'
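
As a quick sanity check (a sketch only, not part of the original setup; '<db_name>' is whatever your target database is called), sp_dboption called without a value reports the current setting, and DATABASEPROPERTYEX shows the effective recovery model on SQL Server 2000:

-- Report whether 'trunc. log on chkpt.' is currently ON for this database
exec sp_dboption @dbname = '<db_name>', @optname = 'trunc. log on chkpt.'

-- Should return SIMPLE if the option took effect
select DATABASEPROPERTYEX('<db_name>', 'Recovery')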

My questions are:

1) Is it good practice to run sp_dboption before the DTS operation of importing 15 GB of data from Oracle into SQL Server?

2) Why am I getting the "Primary filegroup is full" error in spite of my database being set to autogrow the data and log files?

3) This process is running on a 3.2 GHz machine with a 75 GB hard drive and 1 GB of RAM. When I check free space at the time of the error, there is only 9 MB of free space available. When I look in the DATA directory, I see my DB log at 30 GB. Why do I see such a large log when I have run sp_dboption to truncate the log at checkpoint?

Please advise how I can go about doing this.

Thanks
 
I am not able to truncate the log either. Any help is appreciated on how I can do the import without the log taking up almost half of my hard drive. Also, why does it take 30 GB of log space for a 15 GB DB?

Any suggestions would be helpful

Thanks
 
Your recovery model is probably set to "full" and some portions of the log are still active (i.e. uncommitted).
 
Thanks vongrunt for your reply!

Well, if you see my earlier post, I had mentioned that I ran sp_dboption to truncate the log on checkpoint:

exec sp_dboption @dbname = '<db_name>',
@optname = 'trunc. log on chkpt.',
@optvalue = 'true'

Can you throw more light on what you mean by the recovery model being set to full and some portions of the log being active?

What should I do to take care of this problem?

Thanks!
 
Oops, I missed that. This dboption should set the recovery model to 'Simple'. Just to be sure, check this in EM (database -> Properties -> Options). Usually it helps to split a huge DTS import into several smaller ones... what happens if you force a CHECKPOINT manually?

To see info about portions of t-log, use "dbcc loginfo" (column Status).
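
A minimal sketch of doing those checks from Query Analyzer instead of EM (assuming SQL Server 2000; '<db_name>' is a placeholder for the target database):

-- Confirm the effective recovery model (should return SIMPLE)
select DATABASEPROPERTYEX('<db_name>', 'Recovery')

-- Force a checkpoint so inactive portions of the log can be reused
use <db_name>
checkpoint

-- Show the virtual log files; Status = 2 means that portion is still active
dbcc loginfo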

What always helps: backup log :(. If you do it with NO_LOG or TRUNCATE_ONLY, a full database backup is recommended after that.
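
For example (a sketch only, on SQL Server 2000 syntax; '<db_name>', the logical log file name and the backup path are placeholders, and TRUNCATE_ONLY breaks the log backup chain, hence the full backup afterwards):

-- Throw away the inactive log, then shrink the physical log file back down
backup log <db_name> with truncate_only
dbcc shrinkfile ('<db_name>_log', 500)   -- target size in MB

-- Take a full database backup afterwards so the database is still recoverable
backup database <db_name> to disk = 'D:\backup\<db_name>.bak'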
 
Yes, the recovery model is set to Simple. I am still not sure why the log gets filled when I have the truncate-log option set, and why it even tries to fill the log when it is doing a DTS import. Is it not a BCP-equivalent style of loading data?

Will having a primary key on the table during a DTS import cause the log to get filled?

Thanks
 
You still have to limit the batch size for your DTS, because if you don't, the log will grow until the DTS is finished. Edit your Transform Data Task properties, go to "Options", then in the "Insert batch size" field enter something like 100000. That commits 100000 rows at a time, so your log doesn't get full.
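
If it helps to see the same idea outside DTS, this is just the usual batch-commit behaviour of the bulk load path (a hedged sketch, not the poster's actual setup; the table name, file path and terminators are made up for illustration):

-- Hypothetical staging table and export file; BATCHSIZE makes SQL Server
-- commit every 100000 rows, so the log only ever holds one batch at a time
bulk insert dbo.MyStagingTable
from 'C:\export\mytable.dat'
with (fieldterminator = ',', rowterminator = '\n', batchsize = 100000)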

Secondly, you said your drive has only 9 MB free. That's the reason why you got the error:
"Could not allocate space for object <table_name> in database <database_name> because the 'Primary' filegroup is full."

If your database is set to grow automatically by 10%, then your 9 MB of free space is not enough. For example, if your current DB size is 50000 MB, 10% of that is 5000 MB, so you need at least 5000 MB of free disk space for the database to expand successfully.
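
One way to see the current sizes and growth settings, and to switch from percentage growth to a fixed increment, is sketched below (assumptions: run in the target database, and the logical file name '<db_name>_log' is a placeholder; check the sp_helpfile output for the real one):

-- Show logical names, current size and growth setting for each file
use <db_name>
exec sp_helpfile

-- Grow in fixed 100 MB steps instead of 10% of an already huge file
alter database <db_name> modify file (name = '<db_name>_log', filegrowth = 100MB)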


Andel
andel@barroga.net
 
Andel/vongrunt

Thanks for your reply. My insert batch size was set to 0; I have now set it to 100000. I am trying to understand the real need for the log in this case: the 100000 rows will be stored in the log, and when the 100000th row gets inserted, the batch will be committed to the DB and ...

Will the 100000 rows be deleted from the log after it is committed?

Thanks
Josh
 
Thanks Vongrunt! Appreciate all your help; also, the article is very useful.

Thanks again
 