Colleagues,
A theoretical question: when a user (or rather, the program the user is working with) opens a table in a multi-user environment, is that table opened in buffered mode by default? If it is buffered, what kind of buffering might be implemented: record-level, table-level, optimistic, pessimistic, etc.?
(Note: I am not talking about LOCK()/RLOC() functions.)
The origin of the question: we have two programs. Program #1 (P1) runs the other (P2) and waits until a small table, created by P2, appears in a certain location on disk. P2 works flawlessly and does create this small Ret.DBF. The problem is that P1 does not always update its tables with the data in that Ret.DBF. I suspect that if dBase (Visual dBase ver. 5.7) opens its tables in buffered mode, then whatever the method's code writes to this table sits in memory buffers and is never written to the physical file on disk.
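For reference, here is a sketch of the workaround I am considering on the P2 side: explicitly closing the table and issuing FLUSH right after writing, so any buffered data is forced to the physical file before P1 starts reading. The table and field names below are placeholders, not our actual code:

```
&& Sketch (assumption, not our production code): force buffered
&& writes to disk in P2 before P1 polls for Ret.DBF.
USE Ret EXCLUSIVE                && open the result table
APPEND BLANK                     && add the result record
REPLACE ResValue WITH nResult    && placeholder field/variable names
USE                              && closing the table writes its buffers out
FLUSH                            && also flush any remaining open buffers to disk
```

If the problem really is buffering, I would expect the explicit close plus FLUSH to make Ret.DBF visible to P1 reliably; if P1 still misses updates after this, the cause must lie elsewhere (e.g., file-system caching on the network share).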
Regards,
Ilya