
Bad record in database


emucode (Programmer), Jul 29, 2002
I have a database with 555000 records in it. When I try to append the whole file to another copy, it stops at record number 449000. If I use EDIT and go to record 449000 I can see the record, but page down gives me an end of file. If I edit 449002 I can see that record. It seems that record 449001 is bad or has an end-of-file marker in it. How do I get rid of the record to clean up the database?
 
In order to run this utility you'll need to install the Visual FoxPro runtime files.
You can download them here:
ftp://ftp.prolib.de/public

Choose the VFP9 version.

Good luck.
Rob.
 
Or,

- open the database without any indexes
- copy stru to a new database
- copy 1st 449000 recs to new database
- goto 449002
- copy remaining recs to new database until max recno
- close databases
- rename old database to a backup name
- rename new database to current database name
- rebuild indexes.

This procedure has always worked for me, assuming there is only one bad record. Now you only have to worry about how the bad data got there.

Jock
 
Yes, I agree with Jock 100%; it has always worked for me too. Or open the file in DBU, go to the bad record, and double-delete it.
 
I'm using dBASE III and Clipper Summer '87. The database has been in use for 10 years; it's a history file with 594500 records in it. I've copied the first 494000 records to a temp database and then skipped past the bad record, but I'm unsure of the command to copy the remaining records.
 
Hi, emucode

Suggest copying the remaining records into a second temp database.

Then open the first temp database and append from the second, rename the files, build the indexes and God bless.

i.e.

SET EXCLUSIVE ON
USE baddbase
GO TOP
COPY TO temp1 NEXT 494000        && records 1 through 494000
GOTO 494002                      && skip over the bad record, 494001
COPY TO temp2 NEXT 999999        && any count past EOF copies the rest
USE temp1
APPEND FROM temp2
USE                              && close the work area
RENAME baddbase.dbf TO baddbase.bak
RENAME temp1.dbf TO goodbase.dbf
USE goodbase
INDEX ON whatever TO indexname
etc.

Jock

"For a complete list of the ways technology has failed to improve the quality of life, please press three" - Alice Kahn.
 
Ah, the Clipper NTX bug again; it has bitten me quite a few times, way back in Clipper S'87 and 5.0 times. Switching to Clipper 5.2e & the DBFCDX RDD 'fixes' this problem (and some others as well).

To bring the real bug back to memory (I've written about this before): when writing some record, preferably in a larger file, a stray 0x1A (Ctrl-Z = EOF in DOS terms) is inserted in place of the DELETED() flag in some record near the record just written.

The solution presented above is a good way to recover the data.
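
If you want to find every damaged record without a hex editor, a quick low-level scan along these lines should do it (an untested sketch using the S'87 low-level file functions; the offsets assume the standard DBF header layout, and the file name is a placeholder):

* findeof.prg - report records whose deletion flag is a stray 0x1A
nH = FOPEN("history.dbf", 0)             && open read-only
cHdr = SPACE(32)
FREAD(nH, @cHdr, 32)                     && fixed part of the DBF header
nRecs    = BIN2L(SUBSTR(cHdr, 5, 4))     && record count
nHdrSize = BIN2W(SUBSTR(cHdr, 9, 2))     && full header size
nRecSize = BIN2W(SUBSTR(cHdr, 11, 2))    && record length
cSkip = SPACE(nHdrSize - 32)
FREAD(nH, @cSkip, nHdrSize - 32)         && skip the field descriptors
FOR i = 1 TO nRecs
   cRec = SPACE(nRecSize)
   FREAD(nH, @cRec, nRecSize)
   IF SUBSTR(cRec, 1, 1) == CHR(26)      && deletion-flag byte is Ctrl-Z
      ? "Stray EOF marker in record", i
   ENDIF
NEXT
FCLOSE(nH)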

HTH
TonHu
 
Thanks for all your help. I found a hex editor, checked the database, and found 7 EOF markers. I fixed them and it seems to work well. I guess I need to start another post, but I seem to have a problem with this database: the index file corrupts. Any suggestions?
 
Hi, emucode

Assuming you are using NTX's, TonHu's comments are very appropriate. It is entirely possible the EOF characters are not the only garbage data in the database, and I have found the combination of large database size and bad data often causes .NTX files to corrupt, especially when new records are added with indexes open.

However, with Clipper S87 your choices are limited. You might try:

- Running on a DOS machine, or in a DOS box under Windows with as much free base memory as possible (620K or more)
- If the workstation and/or the server are NT-class Windows, disabling opportunistic locking (OPLOCKS)
- Disabling expanded memory (SET CLIPPER=F50;E0)
- Writing a program which passes through the .dbf and scans all fields used in indexes for valid data (no high-order ASCII characters; numerics and dates valid)
- Writing a little program which does nothing but build the indexes; this minimizes code size and maximizes DOS memory (see the sketch below)
- Using Blinker as your linker
- Using INDEX ON, not REINDEX

That should enable you to build clean indexes. I maintain an old S87 system which has a customer history file of over 2 million records (around 1 gig) with 3 indexes this way. But it means that any program which adds many records to the file may have to do so with indexes disabled and rebuild them afterwards, using the special-purpose index program.
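
The index-only program can be tiny; something like this (a sketch only; the file, field, and index names are placeholders for your own):

* mkindex.prg - standalone index rebuild, run with nothing else loaded
SET EXCLUSIVE ON
USE history
INDEX ON custno + DTOS(docdate) TO hist1
INDEX ON docno TO hist2
USE
? "Indexes rebuilt."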

Jock
 
Jock
The application is running on a Windows 2003 server, with a Citrix server. The application is used by three offices in three different cities. The main office has the server, with about 12 people on it; the other offices use the application through Citrix. The database in question is the history file, which is written to every time a document is printed, etc. The app is linked with Blinker 7.0, and the index file is built with INDEX ON. The history file seems to be the only file that is consistently corrupted; all the other files seem to work great, with no EOF markers, and their indexes seem to work great. Could I have a coding problem? I'm using the Clipper locking RLOCK() and UNLOCK. You mentioned opportunistic locking (OPLOCKS): what is it, and where do I disable it? The program has been running for 10 years with constant revisions. Thanks for all your help.
 
Hi emucode!

Try copying the database that goes bad into a clean new one; basically the procedure that Jock suggested before will do (COPY STRU, COPY NEXT 99999, RENAME). You should also start your application so that it checks for missing indexes and builds them before it starts; see the sketch below. Indexes are a problem in S87; that's why my applications build any missing indexes before starting the main user interface.
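
The startup check can be as simple as this (a sketch; the file, key, and index names are placeholders):

* at startup, before the main user interface
SET EXCLUSIVE ON                 && building an index needs exclusive use
USE history
IF !FILE("hist1.ntx")            && index missing? rebuild it
   INDEX ON custno + DTOS(docdate) TO hist1
ENDIF
USE
SET EXCLUSIVE OFF
USE history INDEX hist1          && reopen shared for normal operation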
 
Let me throw another wrinkle into the mix. The network this application runs on has two outside offices connected to the server using Citrix. Sometimes they lose the Citrix connection and have to reconnect; sometimes they get the screen they were on, and other times it's the main menu screen. I have not used the COMMIT command after each time I append or modify a record. Could this be causing the .ntx files to corrupt? The problem seems to be in the history file, but that file is appended to 1000's of times a day and is the most used file. I have used the COMMIT command on some applications, but it gave me problems (it's been a long time and I forgot what the problem was), so I haven't used it. Also, if a person is working in the application while the server is backing up the files to tape, what effect would that have on the databases?
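
If I did add COMMIT, I assume the pattern would look something like this (a sketch; the file and field names are made up):

USE history INDEX hist1           && opened shared
APPEND BLANK                      && locks the new record automatically
IF !NETERR()
   REPLACE docno WITH mdocno, docdate WITH DATE()
   COMMIT                         && flush Clipper's buffers to disk
   UNLOCK
ENDIF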
 
Hi, emucode

Re opportunistic locking, check out the following:

thread 288-555754 (esp griffmg's contribution)
thread 288-359016
thread 288-500375

A search on this site or in Google for "NT Opportunistic Locking" will get you a ton of hits.
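
On the server side, disabling oplocks usually comes down to a registry change along these lines (check the threads above for the details, and back up the registry first; this is the documented LanmanServer parameter):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"EnableOplocks"=dword:00000000

Restart the Server service (or reboot) for the change to take effect.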

Connecting via Citrix should not be a problem because the Citrix server is in fact local and the remote users are not sending database data back and forth, only keystrokes and video refresh info.

However, given your situation, with this being a critical and volatile database, if the opportunistic-locking fix doesn't provide immediate relief I would strongly recommend going to a database server in the short term and starting to think about migrating the app to a more current environment (xHarbour?). That said, I am only too aware how difficult that can be with some old S87 programs, especially if they use a lot of memory variables.

Also consider rebuilding your indexes in the off-hours using a standalone reindexing module as previously suggested to keep corruption down.

Backup software will not normally be a problem if it is well behaved. If it is set to a shared-read mode and doesn't try to set archive bits it should be OK, but if it grabs exclusive use of the files it is backing up then it may cause problems. Try eliminating the indexes from the list of files to back up (you can always rebuild them).

Jock
 