Changed autogrowth from 1 MB to 5000 MB, unrestricted. The .mdf is 35.5 GB. There are only 2 tables in the database; one contains the metadata, and both are cleared before the monthly insert, but the size of this file will slowly increase over time. I will work on Instant File Initialization and...
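For reference, a minimal sketch of that change in T-SQL; the database name MyDb and the logical file name MyDb_Data are placeholders (the real names are in sys.database_files):

ALTER DATABASE MyDb
MODIFY FILE ( NAME = MyDb_Data, FILEGROWTH = 5000MB, MAXSIZE = UNLIMITED );
-- Instant File Initialization requires the SQL Server service account to hold the
-- "Perform volume maintenance tasks" right; it speeds up data file growth, not log growth.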
~223 million rows took 25 min to the raw table, and another 25 min to the actual table. It used to take well over an hour for each. Huge performance increase :)
Worked great, thanks for the help. The bulk insert with BATCHSIZE, ROWS_PER_BATCH and TABLOCK did not add anything to the transaction log...Inserting from the raw table to the actual table with no index and using TABLOCK kept the log growth minimal. Much appreciated.
Here is the procedure...assuming no indexing on the tables.
IF EXISTS ( select id from sysobjects where name like 'raw_pden_prod' and xtype = 'u' )
DROP TABLE raw_pden_prod
GO
CREATE TABLE raw_pden_prod
(
ENTITY_ID varchar(max), PROD_DATE varchar(max), LIQ varchar(max), GAS varchar(max)...
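For completeness, a rough sketch of the load itself as described in the reply above; the file path, terminators, batch sizes, and the final table name pden_prod are placeholders, not the original job's values:

BULK INSERT raw_pden_prod
FROM 'D:\import\pden_prod.dat'      -- placeholder path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 500000,       -- each batch is committed separately
    ROWS_PER_BATCH  = 223000000,    -- optimizer hint, not a commit size
    TABLOCK                         -- needed for a minimally logged load
);

-- Raw table to final table; with no indexes on the target and simple or
-- bulk-logged recovery, TABLOCK keeps this insert minimally logged as well.
INSERT INTO pden_prod WITH (TABLOCK)    -- pden_prod is a placeholder name
SELECT ENTITY_ID, PROD_DATE, LIQ, GAS
FROM raw_pden_prod;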
Thanks for the response...Yes, I am using a BULK INSERT...So if I specify ROWS_PER_BATCH while in simple mode, will the log be truncated after each batch?
The database is in Simple recovery mode...sorry for the confusion; I wasn't saying I did not want to put it in simple mode, just wondering if there was a way to prevent writing to the log at all, or to minimize what SQL Server writes to the log.
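A quick way to confirm the recovery model (the database name here is a placeholder); logging can never be switched off entirely, only minimized:

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb';

-- Simple (or bulk-logged) recovery is one of the prerequisites for minimally
-- logged bulk loads, where per-row logging is replaced by allocation logging.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;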
When I run the bulk insert command the transaction...
Thanks for the response...I figured as much, since SQL Server must use the log to operate. The log file is on its own drive...I am going to split the file, then insert smaller batches and truncate the log after each batch like you said. Thank you.
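A rough sketch of that split-file approach with placeholder chunk names; in simple recovery, a CHECKPOINT between loads lets the inactive log space be reused rather than explicitly truncating it:

BULK INSERT raw_pden_prod FROM 'D:\import\chunk_001.dat' WITH (TABLOCK);
CHECKPOINT;    -- marks committed log records as reusable in simple recovery

BULK INSERT raw_pden_prod FROM 'D:\import\chunk_002.dat' WITH (TABLOCK);
CHECKPOINT;
-- ...repeat for the remaining chunks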
I have a table that I update in my database once a month. The data comes from a 7+ GB flat file, and I usually end up with well over 100 million rows in a table with 5 columns. My problem is that during this insert the transaction log blows up to over 100 GB. After the insert I truncate the log. I...
Added a row terminator of '\n' to the bulk insert and it worked with the smaller file. It seems to be working with the larger file as well. I thought the default would have worked.
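For reference, a minimal sketch of that fix; the table name and path are placeholders, and if the file turns out to use bare line feeds, ROWTERMINATOR = '0x0a' is sometimes needed instead of '\n':

-- An unmatched row terminator can cause the entire file to be read into a single
-- field, which is one way to hit the "grow LOB" error mentioned below.
BULK INSERT dbo.csv_staging        -- placeholder table
FROM 'D:\import\data.csv'          -- placeholder path
WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' );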
I wrote a quick program to read through the first lines of the file and write them to a new file, which I posted above. When I try to bulk import that file, I don't get any errors, but it says 0 rows were imported.
There are 5 columns: ID, Date, and 3 columns with decimal values. No large fields. I think in this instance each column is treated as a LOB, meaning each column value would have a maximum size of 2 GB?
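If LOB treatment is the concern, a sketch of a staging table with sized, in-row columns; the table name, the value column names, and the lengths are assumptions:

CREATE TABLE dbo.csv_staging        -- placeholder name
(
    ID        varchar(50),
    LoadDate  varchar(30),
    Value1    varchar(30),          -- the three decimal columns, names assumed
    Value2    varchar(30),
    Value3    varchar(30)
);
-- varchar(n) values are stored in-row (within the 8,000-byte limit) rather than as
-- 2 GB LOBs, so an oversized value fails with a truncation error instead of
-- silently growing a LOB.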
I am attempting to bulk insert from a CSV file into a table. The CSV is a 6 GB file with 5 columns; I am unsure of how many rows are in the data, but it is over 10 million. Every bulk insert runs into the error "Attempting to grow LOB beyond maximum allowed size". Is there any other way to import...
SELECT ROW_NUMBER() OVER (PARTITION BY ID ORDER BY UPDATETIME DESC) AS Row#, *
FROM TBL
If I SELECT all rows with Row# = 1, that should be the most recent entry for each ID, and then perform the update from there.
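Put together, a minimal sketch of that approach; the final table name FinalTbl and its columns are assumptions, while TBL, ID, and UPDATETIME come from the query above:

WITH latest AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY UPDATETIME DESC) AS Row#
    FROM TBL
)
UPDATE f
SET    f.UPDATETIME = l.UPDATETIME      -- plus whatever value columns need refreshing
FROM   FinalTbl AS f
JOIN   latest   AS l ON l.ID = f.ID
WHERE  l.Row# = 1;                      -- row 1 = most recent UPDATETIME per ID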
I have a staging table with an update time column and an ID column. In the final table the ID is unique, but in the staging table the same ID can appear multiple times with different update times. I need to select the max update time for each ID in the staging table to perform an update on the final table. What is...