Code:
CD (getenv("TEMP"))
Create Table twogblimit.dbf free (c1 C(254), c2 C(254), c3 C(254), c4 c(254), c5 C(7))
? Recsize()
Append Blank
Do While .T.
* double the record count by appending the dbf data to itself in each iteration
Append From Dbf("twogblimit")
Doevents Force
EndDo
This results in the error "File ...dbf is too large".
Reccount() at that stage is 2,097,151, which means the DBF size Header()+Reccount()*Recsize()+1 (the 1 is for the EOF byte) is 567 bytes below the 2GB limit.
Windows Explorer shows 2,147,483,081 bytes, which matches that exactly (2^31-567 = 2,147,483,081).
So VFP indeed stops before corrupting the DBF. I wouldn't rely on that, though.
The damage is done anyway: even if the DBF is OK, the last data is not saved, causing all kinds of follow-up errors.
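If you want to check in code how close a table is to the limit, you can compute that same figure and compare it against 2^31 bytes. A minimal sketch, assuming the table in question is open in the current work area:

Code:
* compute remaining headroom below the 2GB DBF limit
Local lnDbfSize, lnHeadroom
lnDbfSize  = Header() + Reccount() * Recsize() + 1   && +1 for the EOF byte
lnHeadroom = 2^31 - lnDbfSize
? "Current DBF size:", lnDbfSize
? "Records still fitting at this Recsize():", Int(lnHeadroom / Recsize())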
Tamar has given some pointers on how to address the DBF size and avoid the limit, or at least gain some time.
You might SELECT MAX(LEN(ALLTRIM(somecharfield))) FROM yourtable to determine how long the longest trimmed value actually stored is, and optimize such columns' widths accordingly.
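For example, a sketch of shrinking one such column (somecharfield and yourtable are placeholders; ALTER TABLE rebuilds the whole DBF, so it needs exclusive access, a backup, and enough free disk space):

Code:
* shrink a char column to the longest trimmed value actually stored
Local lnMaxLen, lcWidth
Use yourtable Exclusive
Select Max(Len(Alltrim(somecharfield))) From yourtable Into Array laMaxLen
lnMaxLen = Max(1, Nvl(laMaxLen[1], 1))   && guard against an empty result
lcWidth  = Transform(lnMaxLen)
Alter Table yourtable Alter Column somecharfield C(&lcWidth)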
If the FPT size is small, you may change some char fields to memo, which in general won't require many code changes (aside from grid binding needing an editbox instead of a textbox); this way you can make use of 4GB in DBF+FPT combined.
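A sketch of such a conversion, assuming exclusive access and using a field of the example table above (ALTER TABLE rebuilds the table, so free disk space is needed):

Code:
* turn a wide char column into a memo column,
* moving its contents from the DBF into the FPT file
Use twogblimit Exclusive
Alter Table twogblimit Alter Column c1 M
* each record now holds a small memo reference for c1 instead of 254 bytes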
If your FPT file is hitting the limit, you may play with SET BLOCKSIZE when creating a new table to copy the data into. BLOCKSIZE 0 allocates blocks at the granularity of single bytes, meaning each memo value is stored in a block exactly the length of the value, not padded to the next multiple of 64 bytes. But it also means any change of a memo value to a larger size causes memo bloat: the old block is marked unused and the value is stored in a new, larger block. Initially it will save space, though, on average 32 bytes per memo field: if your memos currently use the default block size of 64 bytes, on average half of each block is unused. Your statistics may vary if many memos are empty or short.
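A sketch of that copy (newtable is a placeholder name; SET BLOCKSIZE must be in effect before the new files are created):

Code:
* rebuild the table so the new FPT uses byte-granular memo blocks
Set Blocksize To 0
Use yourtable
Copy To newtable
* newtable.fpt now stores each memo value in a block of exactly its length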
Bye, Olaf.