I know this sounds like a terrible idea, but it would solve a big performance problem for me. Does anyone have experience with, or ideas on, essentially bypassing the transaction log when doing updates?
My application uses a SQL2000 database as a temporary processing area for a large batch update.
The ideal solution, to me, would be to point the log file at a "dummy" drive that would not actually write anything but would fool SQL 2000 into thinking it had. My second choice would be a RAM disk that basically overwrites itself and therefore appears to provide unlimited space.
The database and tables are too large (120 GB, 100-400 million rows) for the entire log to fit on a RAM disk. I'm obviously not worried about recovering this database; it would be faster to re-run the update anyway.
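For context, the closest I've gotten to the effect I want without actually bypassing the log is committing the update in batches under the SIMPLE recovery model, so the log space gets reused as the update runs instead of growing to the full size of the change. A rough sketch of what I mean (the database, table, and column names here are placeholders, not my real schema):

```sql
-- Sketch of a workaround, not a true log bypass: in SIMPLE recovery,
-- log space from each committed batch can be reused, so the log stays
-- roughly the size of one batch rather than the whole 120 GB update.
ALTER DATABASE ScratchDB SET RECOVERY SIMPLE

SET ROWCOUNT 50000            -- SQL 2000: caps rows affected per UPDATE

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    UPDATE StagingTable
    SET SomeColumn = SomeColumn + 1,
        ProcessedFlag = 1
    WHERE ProcessedFlag = 0   -- placeholder marker column

    SET @rows = @@ROWCOUNT
    CHECKPOINT                -- lets SIMPLE recovery truncate the log
END

SET ROWCOUNT 0                -- remove the row cap
```

This keeps the log small, but the writes themselves are still fully logged, which is the overhead I'm trying to eliminate.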
Thanks for any ideas.