Transaction Log on RAM disk?


magicrat (IS-IT--Management), Mar 18, 2001, US
I know this sounds like a terrible idea, but it would solve a big performance issue for me. Does anyone have any experience or ideas on how to essentially bypass the transaction log when doing updates?
My application uses a SQL2000 database as a temporary processing area for a large batch update.
The ideal solution to me would be to configure the log file to point to a "dummy" drive that would not actually write the log, but would fake SQL2000 into thinking that it did. Second choice would be a RAM disk that basically overwrites itself and therefore seems to provide unlimited space.
The database and tables are too large (120 GB, 100 - 400MM rows) to fit the entire log on a RAM disk. I'm obviously not worried about recovering this database; it would be faster to re-process the update anyway.
Thanks for any ideas.
 
On MS SQL Server 7 and 2000 you always have the option to insert and/or update without logging; the syntax depends on the approach you take to the updates. For example, if you use BCP:
-b batch_size.
Specifies the number of rows per batch of data copied. Each batch is copied to the server as one transaction. SQL Server commits or rolls back, in the case of failure, the transaction for every batch. By default, all data in the specified data file is copied in one batch. Do not use in conjunction with the -h "ROWS_PER_BATCH = bb" option.
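As a rough sketch (the server, database, table, and file names here are placeholders, not anything from your setup), a batched bcp load could look like:

  bcp StagingDB.dbo.BigTable in D:\data\bigtable.dat -c -b 50000 -S MYSERVER -T

-c loads the file in character format, -b 50000 commits every 50,000 rows as its own transaction, and -T uses a trusted connection; pick a batch size that keeps the log at a size you can live with.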
When bulk copying large data files into SQL Server, it is possible for the transaction log to fill before the bulk copy is complete, even if the row inserts are not logged, because the extent allocations are still logged. In this situation, enlarge the transaction log, allow it to grow automatically, or perform the bulk copy using the -b or BATCHSIZE parameter, and set the database option trunc. log on chkpt. to true using sp_dboption or SQL Server Enterprise Manager. Setting this option instructs SQL Server to truncate the transaction log each time it performs a CHECKPOINT, removing the log records for committed batches from the transaction log. Because only committed transactions can be truncated, this option does not free up space during the bulk copy operation if the -b parameter is not used, because then the entire operation is logged as a single transaction.
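Here is a minimal T-SQL sketch of the same approach, again with placeholder database, table, and file names; TABLOCK is my addition and simply helps SQL Server treat the load as a bulk (minimally logged) operation:

  -- Truncate the log at every checkpoint (SQL Server 7/2000 option).
  EXEC sp_dboption 'StagingDB', 'trunc. log on chkpt.', 'true'
  GO

  -- Load in 50,000-row batches; each batch is committed as its own transaction,
  -- so the log records for completed batches can be truncated while the load runs.
  BULK INSERT StagingDB.dbo.BigTable
  FROM 'D:\data\bigtable.dat'
  WITH (BATCHSIZE = 50000, TABLOCK)
  GO

Keep in mind that with trunc. log on chkpt. set you give up log-based recovery, which sounds like exactly what you want for a throwaway staging database.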

 