
What can cause Clipper index corruption?


weirdly (Programmer), Mar 13, 2003
We have a legacy Clipper application, and lately we have been experiencing index corruption. The network and the application seem stable, but we have found that our users have linked to several of the tables from MS Access, using the standard dBASE III or IV driver. They run reports and do a daily data download into a set of Access tables. I have been told that there are NO updates to the production data. I'd like a better understanding of what happens when a user links to a DBF from MS Access, and whether I should have any concerns.
 
Hello Weirdly,

If there genuinely are no updates, then there should be no problems - but with MS Access, can you be sure?

Can your people use copies of the data instead? That would alleviate the problem. If they are doing a daily download, there is no reason why it can't be from a copy, is there?
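A nightly batch job along these lines could refresh a reporting copy for Access to link to - just a sketch, and the paths are hypothetical:

   REM Hypothetical nightly refresh: copy the live tables to a reporting
   REM share and point the Access links at the copy, not at production.
   COPY F:\APP\DATA\*.DBF F:\REPORTS\DATA\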

HTH


Regards

Griff
Keep [Smile]ing
 
What index driver are you using with Clipper? The default .ntx and .ndx drivers in 5.2e are known to corrupt on large databases; .cdx indexes (FoxPro format) are much more stable. I have used the SIx driver just to get rid of daily index rebuilds caused by index corruption (a few .dbf's, each > 100 MB).
Access is macho enough to update the last-update date in the .dbf header even if it's only 'reading' the database, or 'fixing' the dBASE III Y2K issue in .dbf files.
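Switching drivers is mostly a link-and-request change. Here is a minimal sketch, assuming the DBFCDX RDD bundled with Clipper 5.3 (the SIx driver exposes its own RDD names, so adjust to your library); the table and index names are made up:

   // Minimal sketch, assuming the DBFCDX RDD; names are hypothetical.
   REQUEST DBFCDX                 // force the linker to pull in the driver

   PROCEDURE Main()
      RDDSETDEFAULT( "DBFCDX" )   // new USEs now default to DBFCDX
      USE customer NEW            // opens customer.dbf via DBFCDX
      INDEX ON customer->name TO custname   // builds a .cdx, not an .ntx
      CLOSE ALL
   RETURN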

HTH
TonHu
 
I too am having frequent index corruption on large database files. I don't understand why reindexing corrects the index sometimes and sometimes not. The index I am having frequent problems with is 19 MB, and the .dbf file is 85 MB. Sometimes when I reindex, the .ntx file comes out at only 18 MB; reindexing again results in a 19 MB file.
 
fbizzell, perhaps you should consider using CDX instead of NTX. I use COMIX with somewhat smaller DBFs (in the 20-40 MB range) and haven't had any problems rebuilding indexes.
 
You do not state the Clipper version used for this application, nor the OS it runs on.

Since the days of Clipper there have been many OS changes. The relatively large NTX indexes are known to cause trouble on larger .dbf's and on networks.

If you're not using the 5.2e or 5.3b version of Clipper, upgrade to one of those.
If you're not using the CDX index driver, start using it.

Rob.
 
With the re-indexing term in my reply, I *assumed* (wrongly, I guess) that the index is rebuilt from scratch, as in re-created - not the REINDEX statement inside Clipper, as that bases the new index partly on the existing index structures, which may themselves be badly corrupted...
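In code, the difference looks something like this (a sketch with hypothetical table and index names; FERASE() needs the file closed first):

   // Sketch: rebuild from scratch instead of REINDEX; names hypothetical.
   PROCEDURE RebuildFromScratch()
      CLOSE ALL
      FERASE( "custname.ntx" )            // discard the suspect index file
      USE customer EXCLUSIVE NEW          // lock the table while rebuilding
      INDEX ON customer->name TO custname // re-create purely from the data
      CLOSE ALL
   RETURN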

HTH
TonHu
 
I am just taking on 6 Clipper programs from more than ten years ago that access 43 databases. I just started using Grep for Windows and Boxer to edit. I am using Clipper 5.3, but I don't know where I can get the upgrade to 5.3b yet.
I use ExoSpace and run a Novell network. I made a boo-boo ten years ago when I decided on 4 digits for a job number, and we now have almost 9,500 jobs: Y2K all over again.

I ran across this indexing problem today when testing my programs on a standalone system. Wasted the whole day (should have come here first), but I have a bit of a unique spin on the problem.

The program is not well designed as far as opening and closing databases goes: I open all of them and leave them open the whole time the user is working. That said, it has been very reliable all these years.

I open 8 database files and 6 index files (.ntx). During the testing phase I deleted all the .ntx files so they would get rebuilt (INDEX ON the code field if the index file is not found; see the sketch below). There is no problem indexing the files when I open one at a time, INDEX ON, and then close it. No errors. But if I run the code that indexes them and leaves the files open, I get to 5 open files and then get the corruption error. Here is the twist: I added code to open the files one by one and index them, closing each file before opening the next. Then I go ahead and open them all, since they do not have to create indexes at that point (I was suspecting memory limits of some sort). I still get to 5, sometimes 6, and sometimes 7 open files, and then the error. The thing is, it runs just fine with the original NTX files that are 10 years old; they all open with no problems.
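The open-or-rebuild logic is roughly this - a sketch, where jobs.dbf, the jobno field, and jobs.ntx are hypothetical stand-ins:

   // Sketch of the open-or-rebuild pattern described above; names made up.
   USE jobs NEW
   IF !FILE( "jobs.ntx" )
      INDEX ON jobs->jobno TO jobs    // rebuild only when the .ntx is gone
   ENDIF
   SET INDEX TO jobs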

I suspect an older version of DBFNTX was more stable, so as a quick-and-dirty, just-in-case scenario I will use the old EXE (compiled with I don't know what version) and see whether the newly compiled EXE will run.

The .dbf files are not even 2 megabytes. I did, however, try cutting out 90% of the old records, and they all opened just fine, so the size has something to do with it; or perhaps it is a bad or corrupt .dbf file, but like I said, they do index fine when opened individually, or up to 5 at a time. I looked at opening and closing the files so I don't have to have more than 5 open, but that amount of work would probably be better spent updating to CDX indexes.


In the other spare time I don't have, I will figure out how to use CDX files.

This is just a quick stop here, as I am pooped from banging my head against the screen all day with this; it's a can of worms that keeps getting messier at each step.

 
Well, I dug out my old Blinker 1.0 and tested it. It all runs fine; retested with ExoSpace and got error 8002 on the index files.

Looking further, here is some more poop on error 8002.

Going to give the latest version of Blinker a try now. If that works well, I will just spring for version 7.0. I don't think I can get an upgrade from 1.0, ya think?
 
DBFNTX has a nasty habit of *sometimes* putting a ^Z (EOF marker) in place of the Deleted flag of a record, denying access across that point - _except_ if you use GOTO BOTTOM to go directly to the EOF and then skip backward through the file, which recovers all records, probably except the one affected record. When skipping backward it seems not to check for an EOF marker inside the record/file.
This 'habit' occurs more frequently as file size grows, and probably has to do with local buffering and all the faults that can be introduced by that.
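A recovery pass along those lines might look like this - a sketch with hypothetical table names; note the salvaged records come out in reverse physical order:

   // Sketch: salvage records from behind a stray ^Z by walking backwards
   // from EOF. Table names are hypothetical; output is in reverse order.
   PROCEDURE Salvage()
      LOCAL i
      USE damaged NEW                  // open without any index
      COPY STRUCTURE TO fixed          // empty table with the same layout
      USE fixed NEW ALIAS fix
      SELECT damaged
      GO BOTTOM                        // jump straight to the last record
      DO WHILE !BOF()
         SELECT fix
         APPEND BLANK
         FOR i := 1 TO damaged->(FCOUNT())   // copy the record field by field
            FIELDPUT( i, damaged->(FIELDGET( i )) )
         NEXT
         SELECT damaged
         SKIP -1                       // back-skipping ignores the stray ^Z
      ENDDO
      CLOSE ALL
   RETURN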

HTH
TonHu
 
Some suggestions:
Upgrade to Clipper 5.2e.
Use Blinker 6 or higher.
Never use the REINDEX command; always rebuild the index from scratch.
Check the opportunistic locking settings on the server. Consider a product like Advantage Database Server, where files and indexes are handled by the server rather than the client.
 
You may want to increase the FILES= line in your CONFIG.SYS or CONFIG.NT file. If memory serves me correctly, the default open-file limit is 20.
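For example, in CONFIG.SYS or CONFIG.NT (120 is an arbitrary example value, not a recommendation):

   FILES=120

If memory serves, Clipper 5.x also enforces a handle limit of its own, raised via the CLIPPER environment variable, e.g. SET CLIPPER=//F:120; treat the exact syntax as an assumption worth checking against your manual.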
 