
Is it RAID or something else? 2


IRABYY (Programmer)
Colleagues:

We recently started to experience quite bizarre behavior from a simple INSERT (SQL) command, i.e.
Code:
INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR
If you issue after that
Code:
GO BOTTOM IN TABLE_ALIAS
IF TABLE_ALIAS.FieldN # m.FieldN
   && Some code
ENDIF
the IF condition above is almost always True, and the program enters the IF...ENDIF code block.

This started when we and our customers switched from Windows NT 4 to Windows 2000 (and Windows 2003) servers with RAID HDD systems. Considering that this system has what I call "heavy buffering", I've come to suspect that INSERT does not actually write the contents of the memory variables to the file on disk but just keeps them in the buffer, and with the next INSERT (I do it in a cycle) the contents of the latest memory variables get lost. Therefore, I modified the above part like this:
Code:
INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR
FLUSH
Still, sometimes FLUSH after the INSERT and/or REPLACE commands does not do the job. So, out of sheer desperation, I put FLUSH in a cycle as well, i.e.
Code:
INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR
FOR I=1 TO 5
   FLUSH
NEXT I
and
Code:
REPLACE TABLE_ALIAS.FieldN WITH m.FieldN IN TABLE_ALIAS
FOR I=1 TO 5
   FLUSH
NEXT I
This worked 99% of the time, but the remaining 1% of uncertainty still bothers me a lot. By Murphy's Law, that 1% is bound to happen on the most "juicy" [wink] customer's system.

Is there any way to ensure 100% that the INSERT and/or REPLACE commands are actually writing to disk, not into memory buffers, without FLUSHing the buffers 5 times in a row (thus slowing performance)? OR - are there any other commands or settings designed specifically to bypass the memory buffers and write data directly onto the HDD?

AHWBGA.



Regards,

Ilya
 
FLUSH only clears the FoxPro buffers to the OS. No matter what you do in FoxPro, you can never "force" the OS to physically write anything to disk. As you suggest in your question, RAID, caching controllers and the different OSs (Novell, Unix, Linux, Banyan, OS/2, etc.) all control their data and where it exists (memory, external buffers, internal buffers - including temporary disk files - or actually on the intended data store) any way they feel is "right".

Rick
 
If you can disable disk write caching at the OS level, maybe you can solve this.
The option is located on the "Policies" tab of the disk's properties. Uncheck it and see if it works.


 
Badukist,
While this may help for files on your local system, it won't help with files on a file server. Also, even locally, this disables only the OS caching - if the hardware includes a caching controller or RAID setup, the data still may not actually be written to disk. (Once again, things were much simpler back in the old DOS "all files are local" days!)

Rick
 
Disabling disk write caching at the OS level can be done only on in-house systems. Just imagine a vendor telling the customer's sysadmin to turn disk caching off on his servers or the vendor's program won't work... The reply would most likely be of the type "Take a hike, buddy! I'll find some other vendor."

Would
Code:
= CURSORSETPROP("BUFFERING", 1, "TABLE_ALIAS")
after
Code:
USE (lcFile) ALIAS TABLE_ALIAS IN 0 EXCLUSIVE
help? Even though it's only FoxPro's buffers we can affect, it's still one less buffer layer to worry about. What do you think, colleagues?
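Roughly, the combined sequence would look like this (just a sketch; file and alias names as in the snippets above, and the 1 in CURSORSETPROP() means buffering off for that work area):
Code:
USE (lcFile) ALIAS TABLE_ALIAS IN 0 EXCLUSIVE
= CURSORSETPROP("Buffering", 1, "TABLE_ALIAS")  && 1 = buffering off for this work area
INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR
FLUSH  && hands the data to the OS; the OS/RAID caches beyond that are still out of VFP's reach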


Regards,

Ilya
 
What I don't find consistent is that the buffers should be a problem at all: it would be horrible buffering code if you wrote something, then read it back, and the reading side didn't get the buffered data just because the physical write was still queued....

It must be a VFP-to-OS issue, not an OS-to-network-to-fileserver-to-fs_buffer-to-raid-to-raid_buffer-to-raid_disk issue, or else you could NEVER trust that chain to work properly. Whenever you read back through the OS, what you get must be what was written...

I could easily be wrong, but it seems like sloppy buffer writing if you can't count on this.
 
Check this link. There is a lot about network client and server settings that can affect normal operation in a multiuser environment. And it is not only about DBFs :).

It seems to be the OS's fault. When I issue a FLUSH command, all buffers MUST be flushed. If that is not what happens, it is not because of VFP but because of the OS (I think).
 
Very interesting! Another great family of MS products...
 
Yeah, that's what I was afraid of, folks: an "It's not a bug, it's a feature" kind of cause for the problem.

"Opportunistic lock", huh?

Thank you for the link, Badukist! That's not the solution, per se, but at least something to acquit myself - and my data merge program - in the eyes of my boss.

It seems the problem is that, even though the end users have exited the data retrieval program, the server still lists the data files as in use and prevents locking them with a USE SomeTable.DBF EXCLUSIVE command. What's really bad is that the system never reports "File is in use" in this kind of situation. It appears that we'll have to tell our customers' admins to disconnect all users from the master merge directory on the server itself before running the data merge program.
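If nothing else, a defensive check around the exclusive open might at least make the failure visible instead of silent (whether the error surfaces at all seems to be the issue here, but trapping the open attempt explicitly can't hurt). A rough sketch only; the file and alias names are the placeholders used earlier in the thread:
Code:
lcFile       = "SomeTable.DBF"   && hypothetical file name
llOpenFailed = .F.
lcOldOnError = ON("ERROR")
* Trap any error from the exclusive open (e.g. "File access is denied").
ON ERROR llOpenFailed = .T.
USE (lcFile) ALIAS TABLE_ALIAS IN 0 EXCLUSIVE
* Restore whatever ON ERROR handler was in effect before.
ON ERROR &lcOldOnError
IF llOpenFailed OR !USED("TABLE_ALIAS")
   * The server (or a cached file handle) still holds the file: make that visible
   * and abort instead of merging into a table that is not really locked.
   WAIT WINDOW "Could not open " + lcFile + " exclusively." NOWAIT
ELSE
   * Safe to run the merge against TABLE_ALIAS here.
ENDIF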

Any other suggestions, anyone?


Regards,

Ilya
 
Exactly this thing happened some time ago, but with an old FoxPro 2.6 application on a Windows 2000 shared folder. It was because of RFCB (file handle caching) on the server side. This issue can be fixed: check the same link for the EnableOpLockForceClose setting in the registry.
The fact is that ALL applications using shared file access have trouble because of these settings. Table/file/index corruption is a common thing because of these "features" (at least that is what they call them).
 
Yeah, "It's not a bug, it's a feature" they say on One Microsoft Way ("There can be only one!" That's from another "opera", but exactly describes MS philosophy). [wink]

Thanks for the link, Badukist! You are getting a Star for your efforts. (You're welcome!)


Regards,

Ilya
 
Some time ago I had a lot of corruption issues with an application in a remote office (pretty data intensive). I was searching for any information that could help me understand what was happening. My conclusions:
- You can have a rock-solid server, but it's enough for one uneducated user to decide that the RESET button is the best solution, and that leads to table/index corruption (now think about MANY uneducated users). You can tell them all day long what they are not allowed to do, and they will do it anyway (and then: "Why did this happen? I only reset after I selected <Save>" :)

- We could have a rock-solid FoxPro (which it is not, yet), but as long as it relies on the network OS, software, redirector, etc., there will still be problems because of the bugs/default settings of those components.

The decision was to port the application to a client-server solution. In 5 months not a SINGLE corruption issue.


 
badukist (Programmer): The decision was to port the application to a client-server solution. In 5 months not a SINGLE corruption issue.

Could you please elaborate a bit on that? Does it mean you converted your DBFs into MS SQL Server format?

As for the "uneducated" users (a euphemism for "illiterate", I presume?) - been there, done that. That's why my UI design is usually bullet-proof: the end user can do no harm to the data using my program while viewing and querying the data in all imaginable ways. But - alas! "You can make it fool-proof, but you can't make it idiot-proof" is one of our mottos in the R&D Dept. There's always one who will try to stick a floppy in with a sledgehammer... [smile]




Regards,

Ilya
 
Not MS SQL Server (it was too expensive), but Firebird SQL Server (100% free). It is an InterBase clone.

All problems just disappeared. I can do online backups, I have real-time database shadowing, I can reindex while users are online (never needed it),... It is rock solid (I've read that InterBase is used in M1 tanks as the database manager for the internal computer).

At that time, I compared MySQL, Postgres and Firebird as cheap/free solutions. Firebird was the winner. (Postgres is more advanced but lacks a native Windows version; it runs in emulation mode with Cygwin.)
 
MYEARWOOD (Programmer) Sep 25, 2003
It is possible that between your INSERT command and the GO BOTTOM, another user has added a record. That would mean the record from your INSERT is not the one at the BOTTOM.

Can't happen. No one can add a record during a data merge; end users can only view the data, not modify it. It looks more like writing goes to the cache while reading is done from disk.

The SCATTER - APPEND BLANK - GATHER scheme is 2-3 times slower than SCATTER - INSERT (we ran the tests here), and people are already screaming bloody murder about slow performance.
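(For reference, the two schemes being compared look roughly like this - SOURCE_ALIAS here is a hypothetical stand-in for the source work area, and MEMO handling is omitted:)
Code:
* Scheme A: SCATTER / APPEND BLANK / GATHER - 2-3 times slower in our tests
SELECT SOURCE_ALIAS
SCATTER MEMVAR
SELECT TABLE_ALIAS
APPEND BLANK
GATHER MEMVAR

* Scheme B: SCATTER / INSERT ... FROM MEMVAR - faster
SELECT SOURCE_ALIAS
SCATTER MEMVAR
INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR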

You've suggested a primary key, but it does not allow null values - how do we append blank? Besides, we cannot make any field a primary key because we may have duplicate values in any field in any data table. We are not dealing with one corporate database (let alone our own), or one customer. We needed a data design that would fit any customer, and if we had gone "by the book" we would have ended up with an individual database for each customer, which means all our forces would be spent on maintenance instead of development. Thus, we had to invent something much more flexible than a conventional database, at the price of abandoning a "theoretically correct" data design.

Thanks for the suggestions, though, I appreciate your efforts and willingness to help.



Regards,

Ilya
 
Perhaps you've already found the solution to this.

But I have also experienced first-hand an instance where a table is inserted into but the indexes were NOT updated! Consequently, GO BOTTOM would go to the wrong record if you had an active index set, as the newly added record is not part of the index collection.

I had this problem on VFP6. I found that installing SP5 resolved the problem.




WTrueman
...if it works, don't mess with it
 
wtrueman (Programmer)
Perhaps you've already found the solution to this.

Not yet.

a table is inserted into but the indexes were NOT updated! Consequently GO BOTTOM would go to the wrong record if you had an active index set, as the newly added record is not part of the index collection.

I had this problem on VFP6. I found that installing SP5 resolved the problem.

By all appearances, this problem is back in VFP 7.0; even SP1 does not help.
Looks like this is exactly what's happening. I will try to work around this issue. Thanks for the tip, bro!


Regards,

Ilya
 
You'll know if this is happening if you try to SEEK or LOCATE one of the erroneous records. The bug in VFP6 was only with the INSERT from MEMVAR, too! The only solution once this happened was the dreaded REINDEX.

Luckily the snippet of code I had to look at wasn't too large, and I found that changing the INSERT to an APPEND BLANK and REPLACE at least resolved the issue. I'm not sure from your previous posts whether or not this will work for you.
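Roughly, the substitution was of this shape (the alias and field names below are just the placeholders from earlier in this thread, not the actual code I had):
Code:
* Instead of: INSERT INTO (DBF("TABLE_ALIAS")) FROM MEMVAR
SELECT TABLE_ALIAS
APPEND BLANK
REPLACE TABLE_ALIAS.FieldN WITH m.FieldN IN TABLE_ALIAS
* ...and so on for the remaining fields (GATHER MEMVAR is the shortcut when the
* memory variable names match the field names, as they do after SCATTER MEMVAR).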

Good luck...it had me tearing my hair out!



WTrueman
...if it works, don't mess with it
 