
Table structure corrupted 2

Status
Not open for further replies.

florindaniel

Programmer
Dec 4, 2009
120
RO
Hello,

I'm having this problem when running in a network environment: every now and then,
especially after a hardware or power failure, some of the tables open at the time
(or their CDXes) become corrupt and no longer work properly.

I usually open a table (say "invoices"), apply a filter and set an index (each table has at most 4 index tags in its CDX)
in the form's data environment. I try to keep the number of permanently open tables to a minimum and still
have this problem. The tables have fewer than 100,000 records...

Any ideas, advice, etc.?
I use VFP9.0 Sedna
Thank you
Daniel
 
Hi Daniel,

I think you've answered your own question. You say this happens "especially when having hardware or power failure". That is surely the most likely cause of the problem.

You need to investigate the reasons for the hardware and power failures, and eliminate them.

In any case, the number of open tables or the size of the tables shouldn't cause any problem.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Thank you Mike,

I had indeed noticed the link between those hardware problems and mine, but I didn't expect
to see structure corruption. Data loss, maybe, but not file structure corruption.

Do you think it would be a better idea to close and open the DBF each time I insert/modify/delete
a record instead of having it open all the time?
 
Do you think it would be a better idea to close and open the DBF each time I insert/modify/delete
a record instead of having it open all the time?

No. I think it would be better to find the root cause of the hardware and power failures.

Regardless of whether these are directly responsible for the table corruption, you don't want them to be causing your users' computers to crash.

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Thank you again Mike.
The root cause of the power failure is that there's no power [smile]
I must have the most... inexperienced users (let's call them that)
I've ever met: network cables cut because of missing documentation, switches powered off
because there was no free plug for the coffee machine, power failures caused by short-circuits,
and so on...
I'm aware the best solution is to prevent those causes, but I'm looking for a way (if there is one) to
minimize the damage.
Today it was a burnt switch that broke the link to one station, and then all the other stations worked badly or not at all [sad]
(yes, I know... SQL is a solution)

Thank you
 
I didn't expect to experience structure corruption. Data loss maybe but not file structure corruption.

Just to be a pedant, structure IS data. It isn't something that is special and protected. If you are prone to data corruption then ALL data (including structure) is vulnerable.

I agree with Mike. The only prevention is to reduce the things that cause the corruption.
 
Among the hardware solutions here: ensure that every computer is plugged into a UPS (uninterruptible power supply). Next, of course, is user education.

Tamar
 
What is most likely happening is that users have appended records, but the header never got updated because of the connection loss. This is a design issue with independent tables shared across computers and networks. It is avoided with SQL servers: the server is generally on a UPS to protect against power loss, and it handles all the actual appending internally; the users just send requests to add, edit, delete or read.

Solutions that come to mind include making frequent backups and asking users not to walk away in the midst of appending data. I recall one company, years ago, actually adding extra records overnight; during the day, as users needed to append, the code would just pick a record from among those added the night before. A very clunky solution almost no one would want to use, but it avoided most corruption issues.
 
It's mostly power outages and network failures during writes that cause table corruption.
And data loss vs. structural corruption are closely coupled; the structure of a DBF is mostly just data anyway.

The most frequent error in a DBF is a wrong record count in the DBF header. There is an easy fix for that:

Code:
SET TABLEVALIDATE TO 0   && skip header validation so the damaged table can be opened at all
USE Table
APPEND BLANK             && forces VFP to rewrite the record count in the header
DELETE                   && mark the dummy record for deletion
* PACK                   && optionally, to get rid of the blank record
USE

There is really no equivalent fix for CDXes; you had better DELETE TAG ALL and redefine the tags. For that you need information about the structure of your tables, and especially of the indexes. The reason is that the data section of an index file is organised as a tree structure, not as simply as the records of a DBF, which are merely a list of RECSIZE() blocks, comparable to C/C++ structs stored to a file: pure data.
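A minimal sketch of such a rebuild (the table name, tag names and index expressions below are made up; substitute your own):

Code:
* Rebuild a corrupt CDX from scratch (example table "invoices")
USE invoices EXCLUSIVE           && exclusive access is required to delete/create tags
DELETE TAG ALL                   && discard the damaged structural CDX contents
INDEX ON invoice_no     TAG invno    && recreate each tag from its
INDEX ON cust_id        TAG custid   && original index expression
INDEX ON DTOS(inv_date) TAG invdate
USE

Keeping these INDEX ON commands in a small rebuild PRG means you can recover any table's indexes on demand after a crash.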

One thing mostly unknown or overlooked: if you think the easiest way to update data is to use DBFs unbuffered, and that there are only a few updates anyway, you fail to see how often a single tab from one control to the next causes a write of a single field, perhaps even when the value did not change. The ControlSource plus the Valid event is an automatism you have to be aware of. And if you say "but I don't lock, so there is no reason for a conflict": the truth is that there is no shared write access, so there are automatic locks at the very least, always, and many more writes without buffering.
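One way to avoid those per-field writes is to switch the work area to table buffering, so edits collect locally and are written in one go. A sketch, assuming an open work area named "invoices":

Code:
SET MULTILOCKS ON                            && required for table buffering
USE invoices
= CURSORSETPROP("Buffering", 5, "invoices")  && 5 = optimistic table buffering
REPLACE amount WITH 100 IN invoices          && this edit goes to the buffer only
IF !TABLEUPDATE(.T., .F., "invoices")        && write all buffered rows at once
   = TABLEREVERT(.T., "invoices")            && on conflict, discard the buffered changes
ENDIF

With this in place, tabbing through controls only touches the buffer; the DBF on the file server is written once, at TABLEUPDATE() time.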

Yes, there is SQL, especially SQL Server backends. Their main advantage is one central service instance accessing the data file(s) instead of a swarm of clients: every operation arrives as a request, and that one service instance processes the requests one after another (or in parallel), but with an overview of what is pending and of how to avoid resource conflicts.

Buffering your work areas and using transactions stabilize file health.
Note 1: Buffers are not part of the DBF files, but of work areas; e.g. if you USE some.dbf AGAIN, you have two buffers for the same DBF, and of course also in different data sessions and at different clients. The downside of that is there is no central current state of a DBF or of a record in it.

Note 2: Transactions add further locking of the files for access to groups of tables, e.g. a child/parent hierarchy, but they add to your responsibility for checking success and resolving conflicts; not resource or lock conflicts, but inconsistent writes, e.g. a delete plus an update, or two updates to value1 and value2. Without buffering and transactions these could even happen undetected, with a random winner. In that respect, unbuffered, non-transactional writes often work quite well, as long as there is low traffic to the DBFs.
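Combining the two, a rough sketch of writing a buffered parent and child as one unit (the aliases "orders" and "orderitems" are hypothetical, and both are assumed to be table-buffered already):

Code:
* Commit parent and child rows atomically, or roll both back
BEGIN TRANSACTION
IF TABLEUPDATE(.T., .F., "orders") AND TABLEUPDATE(.T., .F., "orderitems")
   END TRANSACTION               && both writes succeeded: commit
ELSE
   ROLLBACK                      && undo any partial writes to the DBFs
   = TABLEREVERT(.T., "orders")      && discard the buffered changes,
   = TABLEREVERT(.T., "orderitems")  && or fix the conflict and retry
ENDIF

The point is that a power failure mid-transaction leaves you with either both writes or neither, instead of a half-updated parent/child pair.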

Bye, Olaf.
 