
CDX... growing to crash level in 4 hours

Status: Not open for further replies.

RFanslow (Programmer) · Jul 16, 2014 · US
I have had this situation happen twice in the past 5 months:
a FoxPro table with 23 fields and 3 index fields (Order: char(10), Client: char(10), eDate: datetime).

61 records, yet the CDX grew to 2.8 GB in 4 hours...

I don't mean to be short with the description, and I can supply the data structure and index structure if you want to see them, but it clearly makes no sense for a CDX to grow to 2.8 GB in 4 hours.
Does anyone have any idea where to even start looking for this?

Fanz

 
Just an update on this issue:
There is no index on the memo field.
There is an index on the datetime field, which gets approximately 61 updates as it goes through a process.
We have eliminated the index/CDX in order to stop the possibility of it killing production.

We will be moving the table to MS SQL Server, so this will not be an issue in the future.

This is a multi-user environment, so opening the table exclusively is out of the question.
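Since whatever is bloating the file has to make it grow visibly, one way to catch it in the act is to poll the CDX file size and log when growth bursts happen. A minimal sketch in plain Python (outside VFP; the path, interval, and alert threshold are assumptions, not anything from this thread):

```python
import os
import time

def growth_rate(size_before, size_after, interval_s):
    """Bytes per second of growth between two size samples."""
    return (size_after - size_before) / interval_s

def watch_growth(path, interval_s=5.0, alert_bytes_per_s=1_000_000):
    """Poll a file's size and print a timestamp whenever it grows fast."""
    last = os.path.getsize(path)
    while True:
        time.sleep(interval_s)
        size = os.path.getsize(path)
        rate = growth_rate(last, size, interval_s)
        if rate > alert_bytes_per_s:
            print(f"{time.strftime('%H:%M:%S')}: {path} growing at "
                  f"{rate:,.0f} B/s (now {size:,} bytes)")
        last = size

# Hypothetical usage; the share path is made up:
# watch_growth(r"\\server\share\data\orders.cdx")
```

The timestamps narrow down which user or scheduled process is active when the bursts occur.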

Fanz

 
Removing the index is perhaps the best solution to stop the effect, as you don't need an index on 61 records. I don't see why moving to SQL Server is needed now that the problem is solved; moving just one table of a database to SQL Server is questionable.

The code bloating the CDX file need not be INDEX ON or REINDEX; updates can also grow it, that should be clear. But 61 updates of 61 datetime values done just once will never make a CDX grow past 2 GB, so this has to be happening repeatedly.

As defined by the CDX file format, both interior and exterior node records use 512 bytes, and bytes 24 to 511 hold multiple index keys. But assume that for some reason each DBF record needs its own node record storing only one key (one datetime of the DBF); then the tag would take 61 x 512 bytes for the whole table. If we further assume the worst case, that a node is never reused and the CDX bloats by always appending new node records, then filling 2 GB takes 2 GB / 512 = 2^22 (about 4.2 million) updates in 4 hours, which is about 300 updates per second. Not impossible, perhaps, but you should know if something is doing updates that frequently, and it should show up in Process Monitor very fast.
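The arithmetic in that worst case can be double-checked quickly (plain Python, using only the numbers from the post):

```python
NODE_SIZE = 512          # bytes per CDX node record, interior or exterior
CDX_LIMIT = 2 ** 31      # the 2 GB file-size ceiling
WINDOW_S = 4 * 3600      # the reported 4-hour window

# Worst case: every update appends one brand-new 512-byte node.
nodes_to_fill = CDX_LIMIT // NODE_SIZE
print(nodes_to_fill)              # 4194304, i.e. 2**22, about 4.2 million
print(nodes_to_fill / WINDOW_S)   # about 291 updates per second
```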

Bye, Olaf.
 
You can try creating all three indexes individually, as standalone IDX files, as follows:

index on field1 to i_field1   && one standalone .idx file per index expression
index on field2 to i_field2
index on field3 to i_field3

*then

use dbffile && with your parameters; must open all the indexes as well
set index to i_field1, i_field2, i_field3
set order to 1 && set order as required

Hopefully you'll then be able to see which field is causing the bloating. Also, verify that the same CDX is not used by another table (by mistake), and that no other table with the same name has been copied into this folder. Verify the CDX file by doing a SEEK on each indexed field and checking the result (match the key with the field value) by browsing the found record.
 
