
Managing table with 14 million records?


Mongr1l

Programmer
Mar 6, 2006
Hello.

I'm looking for suggestions on how to optimize my SQL Server 2000 platform so that it doesn't keep getting hung every time I want to make an elementary modification to a table that has 14 million records.

The box is an HP Compaq DL380, a dual-processor hyperthreading machine, with a separate data drive and a separate log drive for log shipping.


For instance, if I want to add a simple auto-incrementing identity field, it gets hung.

If I want to do a simple field conversion, it gets hung.

If I want to do a simple batch update, it gets hung.


Suggestions?


mongr1l
 
Firstly - adding an identity field is not as simple as it sounds.
SQL effectively copies the table, adds the field, and then moves the data back in. This is not a simple operation, and more than likely it hasn't hung; it's just taking a long time.
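Roughly speaking, what happens under the covers is something like this (just a sketch - the table and column names here are made up):

Code:
BEGIN TRANSACTION

-- 1. build a copy of the table that includes the new identity column
CREATE TABLE dbo.Tmp_BigTable (
    ID         INT IDENTITY(1,1) NOT NULL,
    SomeColumn VARCHAR(50)       NULL
    -- ...plus all the other existing columns...
)

-- 2. copy every row across - this is the expensive part on 14 million rows
INSERT INTO dbo.Tmp_BigTable (SomeColumn)
SELECT SomeColumn FROM dbo.BigTable

-- 3. swap the new table in under the old name
DROP TABLE dbo.BigTable
EXEC sp_rename 'dbo.Tmp_BigTable', 'BigTable'

COMMIT TRANSACTION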

Why do you need to change the structure of the table after it's been loaded with 14 million rows?

If you have large tables there are many things you can do to address the problems. (We have tables with more than 60 million rows and access is in milliseconds.)

Split your table into several tables and present them through a view - there's a rough sketch after this list. This can be effective if only one end of the table is changing, such as transactional data based on date, e.g. you keep adding transactions by date but all the old transactions remain untouched.
Have separate filegroups/data files - this will help improve IO performance.

Index the table efficiently.
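On the first point, this is roughly what I mean by splitting the table behind a view (a sketch only - the table and column names are made up, yours would follow your own data):

Code:
-- Each table holds one date range, and a CHECK constraint on the date
-- column tells the optimizer which table(s) a query actually needs.
CREATE TABLE dbo.Trans_2005 (
    TransID   INT      NOT NULL,
    TransDate DATETIME NOT NULL
        CHECK (TransDate >= '20050101' AND TransDate < '20060101'),
    Amount    MONEY    NOT NULL,
    CONSTRAINT PK_Trans_2005 PRIMARY KEY (TransID, TransDate)
)

CREATE TABLE dbo.Trans_2006 (
    TransID   INT      NOT NULL,
    TransDate DATETIME NOT NULL
        CHECK (TransDate >= '20060101' AND TransDate < '20070101'),
    Amount    MONEY    NOT NULL,
    CONSTRAINT PK_Trans_2006 PRIMARY KEY (TransID, TransDate)
)
GO

-- The view presents the separate tables as one logical table;
-- queries go against dbo.Trans as before.
CREATE VIEW dbo.Trans
AS
SELECT TransID, TransDate, Amount FROM dbo.Trans_2005
UNION ALL
SELECT TransID, TransDate, Amount FROM dbo.Trans_2006
GO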


"I'm living so far beyond my income that we may almost be said to be living apart
 
Also, do your updates in batches; never try to run them against the whole table at once. There's an FAQ on how to do this somewhere.
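The usual SQL 2000 pattern for that looks something like this (a sketch only - the table and column names are placeholders):

Code:
SET ROWCOUNT 50000              -- limit each pass to 50,000 rows

WHILE 1 = 1
BEGIN
    UPDATE dbo.BigTable
    SET    NewValue = 1
    WHERE  NewValue IS NULL     -- only touch rows not done yet

    IF @@ROWCOUNT = 0 BREAK     -- nothing left to update, so we're finished

    CHECKPOINT                  -- with 'trunc. log on chkpt.' set, lets log space be reused between batches
END

SET ROWCOUNT 0                  -- back to normal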

You probably need better hardware to handle tables this size; then you can partition the table over several drives and improve performance. Indexes should also be on separate drives. Consider whether you need to archive old data to improve performance.

Never do a modification like this through Enterprise Manager; it is much slower than running the same thing in Query Analyzer.

Do any required modifications during off hours and, if at all possible, put the database into single-user mode. Otherwise you will just be hanging up your users at the same time.

Questions about posting. See faq183-874
 
Thanks!

I'll work with this and see what I can come up with.

mongr1l
 
Ok, here's what I tried:

Code:
DECLARE @I BIGINT, @MAX BIGINT, @OFFSET BIGINT

SELECT @MAX = MAX(ID) FROM Rate_nanp_factype_daily
SELECT @I = 0

WHILE @I <= @MAX
BEGIN

    UPDATE t
    SET t.rc1_vendor = x.[Vendor_id]
    FROM Rate_nanp_factype_daily t
    INNER JOIN (SELECT s.id, a.[Vendor_id]
                FROM (SELECT id, rc1_clli
                      FROM Rate_nanp_factype_daily
                      WHERE ID BETWEEN @I AND (@I + 500000)) s
                INNER JOIN ACU_TRUNK_GROUP a ON s.rc1_clli = a.[CLLI]
                GROUP BY s.id, a.[Vendor_id]) x ON t.id = x.id

    SELECT @I = @I + 500000

    PRINT @I
    PRINT 'NEXT SET'

END

Basically, this is my interpretation of SQL Sister's suggestion. I put the update in a WHILE loop that handles 500,000 rows at a time. I also designed the join to optimize the table scan - in other words, so that all 14 million records don't join against the ACU_TRUNK_GROUP table on every pass, only the current 500,000-row slice does. That was accomplished via a subquery using inner joins.



It almost completed in 32 minutes... but failed because the 'primary index ran out of space'.

What the hell does that mean?


Should I be shrinking the DB after every iteration?

mongr1l
 
Actually, this is the message returned:

Code:
Server: Msg 1105, Level 17, State 2, Line 12
Could not allocate space for object 'Rate_nanp_factype_daily' in database 'Penalty_Cost' because the 'PRIMARY' filegroup is full.


ideas?

mongr1l
 
You have run out of disk space - the PRIMARY filegroup can't allocate any more room.
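You can confirm it from Query Analyzer, and if there is room on another drive you can give the database an extra data file instead of moving the whole thing (the file name and path below are just examples):

Code:
EXEC sp_helpdb 'Penalty_Cost'   -- file sizes and locations
EXEC sp_spaceused               -- space used/free inside the current database

-- add a second data file (goes to the default filegroup,
-- which is PRIMARY unless you've changed it)
ALTER DATABASE Penalty_Cost
ADD FILE (
    NAME = Penalty_Cost_Data2,
    FILENAME = 'E:\MSSQL\Data\Penalty_Cost_Data2.ndf',
    SIZE = 2GB,
    FILEGROWTH = 500MB
)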

"I'm living so far beyond my income that we may almost be said to be living apart
 
Wow... do I feel foolish.

Thanks, hmckillop.

 
Ok, I detached the db, copied the data file over to a much larger disk, and reattached. The above query ran fine in 28 minutes. That problem was fixed...
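For anyone following along, the detach/attach part was just the standard procedure (the paths here are placeholders, not my real ones):

Code:
EXEC sp_detach_db 'Penalty_Cost'

-- ...copy the .mdf and .ldf files over to the bigger disk...

EXEC sp_attach_db 'Penalty_Cost',
    'E:\MSSQL\Data\Penalty_Cost_Data.mdf',
    'E:\MSSQL\Data\Penalty_Cost_Log.ldf'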


Now I'm trying to do a simple aggregate insert from the above source table into a target table, like this:

Code:
INSERT INTO PENALTY_SIDExSIDE (TNPA, TNXX, TOCN, STATE, EOFFICE, CHOICE, CDR_SRF_TERM_DATE,
CDR_SWITCH_ID, TVID, VENDOR, LATA, TANDEM, TFACTYPE, LCR_CLLI1)

SELECT NPA, NXX, OCN, STATE, ENDOFFICE, CHOICE, CALL_DATE, SWITCH_CODE,
VENDOR_ID, RC1_VENDOR, LATA, TANDEM, TERM_FACILITY_TYPE, RC1_CLLI

FROM Rate_nanp_factype_daily

GROUP BY NPA, NXX, OCN, STATE, ENDOFFICE, CHOICE, CALL_DATE, SWITCH_CODE,
VENDOR_ID, RC1_VENDOR, LATA, TANDEM, TERM_FACILITY_TYPE, RC1_CLLI


After an hour, I get the following message:

Code:
Server: Msg 9002, Level 17, State 6, Procedure spGenerate_Penalty_Table, Line 20
The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
The statement has been terminated.


So now I have a couple questions:
A.) Do I use 'TRUNCATE LOG ON CHECKPOINT' and, if so, how?
B.) Should I be using 'Begin Transaction... Commit Transaction'?
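In other words, is it something along these lines? (Just guessing at the syntax here - I haven't verified either one.)

Code:
-- A.) the old 'truncate log on checkpoint' database option:
EXEC sp_dboption 'Penalty_Cost', 'trunc. log on chkpt.', 'true'

-- B.) wrapping the insert in an explicit transaction:
BEGIN TRANSACTION
    -- ...the INSERT ... SELECT ... GROUP BY from above...
COMMIT TRANSACTION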



mongr1l
 