I have a table in my database that I refresh once a month. The data comes from a 7+ GB flat file, and I usually end up with well over 100 million rows in a table with 5 columns. My problem is that during this insert the transaction log blows up to over 100 GB, and I truncate it after the insert finishes.

I have no real need for transaction logging on this database: it holds just 2 tables, one with historical data and one with metadata. No updates or inserts are done against the data after the load; it is completely refreshed once a month. The file also grows every month, so I am running out of disk capacity for the log file during the insert.

Is there any way to deal with this other than setting the recovery model to SIMPLE? Is there a hint to avoid transaction logging during the bulk insert statement? Thanks for any help.
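For reference, the monthly load currently looks roughly like this (a minimal sketch; the table name, file path, terminator options, and log file name are hypothetical stand-ins for my real ones):

```sql
-- Monthly refresh: clear the table, then reload everything from the flat file.
TRUNCATE TABLE dbo.HistoricalData;       -- hypothetical table name

BULK INSERT dbo.HistoricalData
FROM 'D:\loads\monthly_extract.dat'      -- stand-in path to the 7+ GB flat file
WITH (
    FIELDTERMINATOR = '\t',              -- assumed delimiters; mine may differ
    ROWTERMINATOR   = '\n'
);

-- By this point the log has grown past 100 GB, so I reclaim the space afterward
-- (shown here as a shrink; the logical log file name is a stand-in).
DBCC SHRINKFILE (MyDb_log, 1024);
```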