Networking Question, when are changes updated without Buffering?


ManniB

Programmer
Nov 9, 2020
Hello,

please consider the following basic networking question:

Assuming buffering is disabled and there are only free tables: when both User1 and User2 have opened the same table simultaneously, and User1 makes changes like adding records and then issues FLUSH, when will User2 start to see the changes? Does VFP fetch data for User2 periodically? Does User2 have to close the table and open it again to see any changes? Does User2 see the changes after issuing a FLUSH command? Could a FLUSH by User2 remove records which User1 has added in the meantime?

Thank you in advance!
Manni

 
Manni,

It depends on what User 2 is actually doing at the time that User 1 makes the changes. If User 2 has a form open, and that form is showing the data that User 1 has changed, then the only way that User 2 will see the edited data is to refresh the form. They could do that by issuing THISFORM.Refresh, or by moving the record pointer off the record and on again. They certainly won't have to close and re-open the table.
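For instance, a minimal sketch of the "move the pointer off and on again" approach (the workarea and form are hypothetical):

Code:
* Assumed: User 2's form shows data from the shared table in the
* current workarea. Move the pointer off and back to force a fresh
* read of the record from the DBF, then repaint the form.
LOCAL lnRec
lnRec = RECNO()
SKIP             && move off the record ...
GO m.lnRec       && ... and back again, re-reading it from disk
THISFORM.Refresh()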

If, on the other hand, User 2 is viewing a Browse window, then that window (including any subsidiary memo field windows) will be refreshed automatically, the timing of the refresh being determined by SET REFRESH.
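That refresh interval can be set like this (a sketch; the interval values are arbitrary):

Code:
* Redisplay the Browse, and re-fetch data from the network,
* every 5 seconds:
SET REFRESH TO 5, 5
BROWSE      && now picks up other users' changes periodically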

Note that the FLUSH command is not really relevant here. It doesn't affect what I said above, and in any case there is normally no need to use it.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Thank you Mike!

So basically, when the table is opened and it's also the RecordSource of a grid, then when the grid is refreshed, VFP will fetch data and also physically send any new data entered?

What happens when a table is only opened programmatically, not as the RecordSource of a grid or the ControlSource of a control, and the table is then manipulated (programmatically in the background) by two users at the same time? When will User1 see the changes User2 made, and vice versa? Does FLUSH not help here?

Thanks,
Manni

 
Manni,

What you said about a grid is basically correct. VFP will fetch data when the grid is refreshed. And also when it receives focus. If no buffering is in force, or if row buffering is in force, the grid will send any edited data as soon as the record pointer moves off the current record.

As far as the table being opened programmatically is concerned, if no buffering is in force, you can assume that, when a user moves the pointer to a different record, that record will always contain the most up-to-date data.

If a user is editing the data, and if that user has row buffering in force, the edited data will be saved - and therefore visible to other users - when the editing user issues TABLEUPDATE() or when they move the record pointer off the edited record. If table buffering is in force, the editing user will have to issue TABLEUPDATE() or close the table in order for the data to be saved and visible to other users.
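As a sketch of both commit points (table and field names are hypothetical):

Code:
SET MULTILOCKS ON                  && required for table buffering
USE mytable SHARED
CURSORSETPROP("Buffering", 3)      && 3 = optimistic row buffering
REPLACE somefield WITH "new value"
SKIP                               && moving the pointer commits the row

CURSORSETPROP("Buffering", 5)      && 5 = optimistic table buffering
REPLACE somefield WITH "other value"
IF NOT TABLEUPDATE(.T.)            && nothing is visible to other users
   * handle the failed commit here && until this commit succeeds
ENDIF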

None of the above requires FLUSH to be issued. In typical data entry and editing applications, you never need to use FLUSH.

Mike

 
Just another word on FLUSH ....

Basically, you should forget about it. It is a hang-over from the old days of dBASE and FoxBase, when table and index corruption was common. It is rarely needed these days. In fact, I can't remember the last time I used it.

At least, that's my opinion. If anyone else in the forum has a good reason for using FLUSH, it would be interesting to hear about it.

Mike

 
Keep in mind one very basic concept: a workarea is a handle on only one current record of a DBF. Especially when we talk about unbuffered workareas, the whole notion of multiple records held in a buffer does not exist. At the low technical level you have a file handle and a file pointer to the first byte of the current record. Whatever changes in the file will only affect you when you move the file pointer, and even the bytes at the pointer (up to one record length from it) only affect you when you read them again.

So when you open a table, the workarea is positioned on record 1 of that table. And even in a grid you don't have multiple records you act on: the grid has one textbox per column, and that is for the current row. The other rows are painted when the grid is first rendered and are just that, drawings of the controls they only really become when you make that row active. Even the Sparse setting of a column does not mean you get multiple comboboxes (or whatever other CurrentControl you use for that column); it just keeps an image of that specific control in the non-active rows of the grid instead of drawing the content of the cell's default Text1 box.

Having that in mind, you could only hope for the active row of a grid to show the current DBF row, but even that doesn't happen.

If you use the Data Session window of the IDE and double-click on a workarea to see what's in it, a Browse window (actually a form with a grid) shows you as many records as it can. As a Browse can show something like 20 records, that can be all records of a table. Seeing is believing, so I guess your idea of how VFP workareas work is that VFP fetches all DBF data when you open a table. No, it doesn't: if the DBF has more records than the Browse can show, the Browse only loads as many records as it needs to fill itself visually. The length of the scroll bar is computed from the record count, which is read from the DBF header; VFP does not read more. You can verify all this with a network monitor tool that logs the network activity of a client.

Mike Lewis already addressed the specialty of BROWSE in conjunction with SET REFRESH. That's specific to BROWSE, not to the grid that is also part of a Browse. A grid in itself only fetches as many records as it shows, and scrolling means loading further data. If you scroll through many records this might mean skipping gaps: scrolling from top to bottom very fast does not mean VFP loads all records. If you do it in one step, you fetch only roughly the first N and last N records of the DBF by using the grid that way.

The grid only ever fetches data when you set focus to a record or scroll. If records in view are changed by others, nothing happens, not even when you SET REFRESH to a seconds-based option; that only applies to BROWSE.

To summarize: The rule of thumb is VFP is as lazy in loading data as it can and goes into sleep mode until you act on controls.

That also applies to simpler controls than the grid. Say a textbox is bound to a field and shows that field's data for some record. Nothing happens actively, neither from the textbox side nor from the file server side, to keep that display up to date when another VFP session, or another user on a different client, changes that field of that specific record. When that change reaches the DBF file and you click on the textbox still showing the old field data, at that moment it first refreshes and you see the new value. But if you just wait for that to happen automatically, you don't see changes made.

From that perspective, you can't start editing an old value that's no longer in the DBF. That also doesn't happen when you buffer, by the way: your buffer already has to hold a changed, not yet saved value for it not to get refreshed from the DBF; the buffer has priority. But you have the CURVAL() function to actively fetch the value stored in the DBF. It only works in buffered mode, though; well, because in unbuffered mode that refresh happens automatically.
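In a buffered workarea, CURVAL() together with its counterpart OLDVAL() can be used for a conflict check like this (the field name is hypothetical):

Code:
* Buffered workarea: has another user changed this field in the
* DBF since we fetched it into the buffer?
IF CURVAL("qty") <> OLDVAL("qty")    && hypothetical field "qty"
   WAIT WINDOW "Record was changed by another user" NOWAIT
ENDIF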

Sounds good; it sounds like not buffering has advantages. You can't just sit there assuming what you see gets automatically updated with changes made by others, but the moment you yourself become active, you do see the current data.

[highlight #FCE94F]There's always a mini buffering effect, though, that you can't turn off. What you write into a control is only stored into its ControlSource (which it also came from) when you leave that control and cause the Valid event, and the Valid event does not return .F. to prevent that ControlSource storage from happening. That is the default if you don't program anything into Valid. Still, that event has to happen, and it only happens when you leave the control.[/highlight] One very characteristic effect of that: even a TABLEUPDATE() of all buffered records with the force option may save everything except the value in the cell that currently has the text cursor, because that value hasn't gone through the Valid event yet. A solution is to trigger the Valid event elegantly by using SetFocus on that very cell. Visually nothing changes, you don't leave the cell, but you go through the Valid, LostFocus, GotFocus cycle of events, and that puts the cell's value into the buffer; TABLEUPDATE() then commits it from the buffer to the DBF.
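That SetFocus trick can be sketched like this, e.g. in the Click of a save button (control usage and message are made up for illustration):

Code:
* Re-setting focus to the active control runs its Valid / LostFocus /
* GotFocus cycle, pushing the pending cell value into the buffer:
IF VARTYPE(THISFORM.ActiveControl) = "O"
   THISFORM.ActiveControl.SetFocus()
ENDIF
IF NOT TABLEUPDATE(.T., .T.)     && all buffered rows, force option
   * commit failed, react here
ENDIF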

By the way, losing focus in the grid does not mean you have to leave the grid and change focus to some outside textbox or other control, as the grid has textboxes per column and each has its own Valid event. But with row buffering, the moment of changing the DBF file is leaving the record, not just changing column within the same row, of course. And with table buffering you have to actively use TABLEUPDATE() to store buffered changes; you can choose to emulate row buffering by only updating one row, with the advantage of being able to do so even if you don't leave the row.

What follows from this is that new records, or any other changes even to the data in view, have no direct effect on a form, not even on the position and size of the slider of a grid's scrollbar. There is no VFP server component; the only server role there is, is that of a file server, and that keeps track of who has file handles but does not broadcast changes. So the file server/OS/file system side outside of VFP also does nothing to update the clients. Even less so with opportunistic locking, but that's another topic of its own.

The mechanism that's our hero is just-in-time updating when you use controls. In unbuffered mode you don't have to think about the buffer, obviously. The bad news, although this is not a frequent or hard concern, is that this just-in-time updating can be affected by network glitches: you can fetch a cached value. That's not only a possible VFP cache error, hitting a cached value where a cache miss should have fetched from the server DBF instead; it also involves any software and hardware caching besides VFP's. One important thing is to have write-through, so commits don't get saved lazily to a DBF. Hardware write caching and delays to the DBF files cause other clients not to see what VFP considers saved from the perspective of the client that did the saving. That includes the problem that this client got success feedback from the outside world, OS and hardware, that the save is done, while it's still just on the way through some write caching mechanism.

By the way, as a rule of thumb, you'll use buffering to minimize all that even further. As buffered values have priority over DBF-stored values, once you modify something you don't even see changes to the same record and field made in the DBF by other clients/users/VFP sessions. Only before you start editing, when the record and field are not yet changed and thus not yet buffered, do you see the latest value. Apart from, of course, all the problems in hardware and OS that hinder it.

There's a reason to combine buffering with locking: it can work by locking (even automatically) before starting to edit. So other users can only read the still-not-updated DBF value, but will not be able to also start editing, because they get no lock. In optimistic mode you allow that, as you don't lock. There are pros and cons to both approaches.

The way locks are communicated to other clients still isn't a broadcast or peer-to-peer communication programmed into VFP; it makes use of features of the NTFS file system, and each client still reads from the server file actively. There's no action from the server side whatsoever about this either. In conjunction with the cache problems of OS and hardware, that also means locks may not be seen by other clients, because (oversimplified) they are also part of the files, just like the data. And when a client can't see current data that's still hanging in some cache and hasn't yet arrived in the file, the same applies to locks.

That's what makes VFP so vulnerable to changes like those MS introduced for improving network performance with tricks that use caching. Even before opportunistic locking was so much in discussion for breaking VFP's mechanisms, I had problems with automatic and manual locks, too. Overall you can say that VFP's implementation of DBFs is, from the network and file system perspective, vulnerable to network protocol and file system changes. And that happened even while VFP was in its major lifecycle and support phase.

You can turn your back on DBF usage for that reason and use DB servers. Just notice that this means you're even further from having a view of the current server data: all SQL mechanisms mean you read some snapshot of the data and work on that copy, committing changes back.

There is one paradise of DBF usage that compares very much to single-user mode: a terminal server, where multiple clients use the same DBFs on a drive that is local from the perspective of the server and of the VFP processes running on it per user session. It just has CAL (client access license) costs, and allowing users to have terminal server sessions means such a server needs more resources than the file server role alone. But this works so well because only the session graphics travel across the network to the remote users, not all the data traffic. And locally shared use of DBFs removes many of the issues you have with networked DBFs.

Chriss
 
Thank you for the excellent answers, Chriss and Mike, and I'm very sorry for my late reply; unfortunately, I have been very busy during the last couple of days.

So basically, with no buffering in place, when the record pointer moves, the current record will be updated in the database, and likewise inside a control as soon as the Valid event triggers or a form is refreshed. That's very good news: when working with tables directly and no buffering in place, VFP fetches and commits data very promptly, and you don't have to worry about using FLUSH, closing the tables etc. to commit the changes.

The reason I'm asking is, I'm trying to avoid networking problems inside a multiuser application where no buffering is in use or needed at this point.

Could you please explain to me what could possibly go wrong in a situation where multiple users (fewer than 10) use the same table at the same time in a grid, or when values in the same table are appended, updated or deleted simultaneously in the background? I'm talking about all the operations which can be done in shared mode, not changing the structure of tables etc.

So far, I've noticed that when both users try to add records at the same time, it can lead to one user not succeeding because he can't get the lock. From my observation, setting SET REPROCESS to a few seconds can help; some have suggested creating your own function to take care of locking in case an APPEND BLANK/REPLACE/INSERT INTO/UPDATE fails.

What other possible pitfalls are there in this situation, and how would you suggest avoiding them when, data-wise, an optimistic approach is desired but is to be implemented without buffering, only free tables opened in shared mode?

For example, do you check if the lock is successful after every APPEND BLANK?

I would very much appreciate it, if you could shed some light on this issue.

Thanks,
Manni

 
ManniB,

First point: There is a difference between buffering and record locking. As Chris mentioned, you would normally specify locking when you set the buffer mode, but the two concepts are quite different.

You also need to understand the difference between optimistic and pessimistic locking. This is a complicated subject, and rather than go into all the details just now, I will describe what I usually do in your situation.

In all my forms where there is any kind of table updating, I specify pessimistic locking. To do so, I set buffering to either 2 (pessimistic row) or 4 (pessimistic table). (For both settings, SET MULTILOCKS must be ON.) This means that, if User A is in the process of editing a record and User B then tries to edit the same record, an error (Error 109) will be generated for User B.
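In code, those settings correspond to something like this (the alias is hypothetical):

Code:
SET MULTILOCKS ON
SELECT mytable                    && hypothetical alias on the form
CURSORSETPROP("Buffering", 2)     && 2 = pessimistic row buffering
* or:
CURSORSETPROP("Buffering", 4)     && 4 = pessimistic table buffering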

In my global error routine, I check the error number. If it is 109, I issue a message advising the user (this is User B, remember) that someone else is editing the data, so please wait a moment and try again. I then come out of the error routine via RETRY, which means User B will still be sitting in the same form, ready to resume editing. It's important for the form to have a Cancel or Revert option so that User B can decide to go away and do something else while waiting.
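A sketch of such a global error routine (the procedure name and message wording are made up):

Code:
* Installed somewhere at startup:
* ON ERROR DO GlobalError WITH ERROR()
PROCEDURE GlobalError
   LPARAMETERS tnError
   IF tnError = 109      && "Record is in use by another user"
      MESSAGEBOX("Someone else is editing this record." + CHR(13) + ;
         "Please wait a moment and try again.", 48, "Record in use")
      RETRY              && re-execute the line that failed
   ELSE
      * ... regular error handling ...
   ENDIF
ENDPROC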

This isn't the only approach, but it is a fairly simple one, and one that works well for me in most of my projects.

Mike

 
First point: Yes, you're right, FLUSH is optional, and it doesn't only concern data caching. I'd go as far as saying that if you need it, you have bigger problems, and it won't be a full solution. A good network that does without write caching is the bare minimum for good DBF usage, and write caching done outside of your influence is a concern.

I actually can't tell you from experience what happens without buffering, as I learned about it very early and have never done without it since. But from what I can tell about how it works:

1. There always are automatic locks when you write something; that includes appends and inserts, as they write to the DBF file. Those automatic locks are not permanent locks on a new record that stay until you finally write it away; they are short-lived locks on the whole file, held only until that new record (INSERT) or blank record (APPEND BLANK) exists. Then you go into shared edit mode, so to say. This is documented in the VFP Help topic on record and table locking.

2. INSERTs and APPENDs are the least problematic, unless you have a stored procedure to determine the primary key value; autoinc is covered by VFP and works nicely concurrently, and with a GUID as the other commonly used option you don't need to fear collisions. The creation of a record itself, primary key value aside, is very stable, because you always will have your record pointer at that new record. Done concurrently, that means another user/session will add another record and have their pointer there; you don't cause the record pointer of the other session to move. That is, the record pointer isn't anchored to the last record; it's simply at that new record number and stays there unless you do something else, like another append, a GO, etc.

I'd summarize that new records are the least problematic unless you generate the PK with defaults/stored procedures. Remember, any code runs at the client; there is no VFP server process, so that's not centralized and can have concurrency problems.

3. The point of concurrency problems leads me to the remaining topic of shared editing. Since you now know (if you didn't already) the role of Valid, you know not every interactive or programmatic change to a control value is directly forwarded to the DBF file. But code doing REPLACE or UPDATE does indeed act on the DBF and not on the control's Value.
The only problematic concurrency is while writing, and since that involves automatic locking, even without pessimistic row or table buffering, and even without using RLOCK() or FLOCK(), there is a larger time window for concurrency. SET REPROCESS is good to know, indeed, so you don't immediately fail whenever VFP encounters the problem of not being able to lock at some point while writing.
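For example, either form of SET REPROCESS keeps VFP retrying a failed automatic lock before raising an error:

Code:
SET REPROCESS TO 5 SECONDS   && retry a failed lock for up to 5 seconds
* alternatively, a number of attempts:
SET REPROCESS TO 10          && try 10 times before giving up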

On top of writing to a DBF file being as complex as it is, needing at least an automatic (file!) lock, it also updates the CDX index tags, which again involves a lot. So there isn't just one network packet to the server with the task to update some file portion; there's a lot of back-and-forth communication, which ideally happens without overlap between two clients wanting to change the same data, even just in the same DBF, not only in the same records.

All that said, you also have these problems with TABLEUPDATE(), so while using buffering. But the advantage is that you are forced to do the updating with TABLEUPDATE() and thus have a defined point of failure and can react to it, while any problem you have without buffering happens anytime, anywhere, and only triggers the general ON ERROR handler. And notice that TABLEUPDATE() has a general success result value of .T. or .F.; it does not raise an error if something goes wrong, so you can handle the problems locally.
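Checking that result locally might look like this (a sketch; reverting is only one possible reaction):

Code:
IF NOT TABLEUPDATE(.T.)        && commit all buffered rows
   LOCAL laError[1]
   AERROR(laError)             && details of what went wrong
   MESSAGEBOX("Commit failed: " + laError[1, 2])
   TABLEREVERT(.T.)            && or resolve the conflict instead
ENDIF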

I know you ask because you'd need to introduce TABLEUPDATE() in a lot of places to introduce buffering. I'd recommend it, it's worth it. If you still refrain because you only have 20 users and it has already worked this way for a long time, I'll not force it upon you.

To round up: I think what you get is that problems arise too rarely, and you get away without locking. But one thing is for sure: regarding the update of a DBF, you have a "last change wins" situation without any chance to even detect what I'd call an extended concurrency problem. Let's rewind to what I said about starting to edit: you activate a control and VFP does a last refresh of the value with the DBF data, so you edit the most recent value and not what might have been sitting there for a few minutes and meanwhile been changed by other users. But that happens just once, in GotFocus. If you start editing and then have a pause, like a phone call or some other interruption, you don't get updated with changes happening in the meantime. You may then finish editing and have no error in saving, but you never know whether your write overwrote someone else's change. So it compares to using buffering with TABLEUPDATE() using the force option.

So that's how I would summarize working without buffering overall: it compares mostly to optimistic row buffering with table updates using the force option to ignore conflicts. With the one downside that in the event of write problems only your general error handler is triggered; you don't have the chance of simpler local error handling after TABLEUPDATE().

And now overall it becomes a consideration about that low risk in a 10 user system.

I can only add that I witnessed the effect of opportunistic locking before I even knew it was called that. As there were errors, we tried to observe the problem live: we made a change on one client and tried to read it from another client and failed, even after restarting the form. So the actual change didn't happen to the DBF file; only the client that made the change also saw it. As I learned later, that exactly describes what opportunistic locking does: it moves the responsibility to maintain the DBF file to the client that holds the opportunistic lock, which means all changes are cached there. The promise that this isn't a problem for other clients is that any access to the server DBF file, even just a read access, makes the server reach out to the client that holds the opportunistic lock and ask for the cached data, to actually write it to the DBF and let that changed data be read by other clients; this is called the automatic breaking of the opportunistic lock. But that obviously can fail.

And more things than opportunistic locking have been introduced that also make it harder to keep going with DBFs. Offline files, for example.



Chriss
 
@Mike
Thank you for the explanation, indeed you are right that my question overall is more related to locking than buffering. I will simulate the situation in a network and experiment with your approach of using pessimistic locking.

@Chriss
Thank you for clarifying, that FLUSH is basically obsolete nowadays.

So if I understand your explanation right, Chriss, in terms of buffering I shouldn't be worried about a situation where different users are adding records (APPEND BLANK / INSERT) at the same time, because each record will receive its own record number and VFP handles concurrency well here. But in terms of locking it could lead to problems, for example when a REPLACE ALL prevents an APPEND BLANK or INSERT from receiving a lock, and you have to take care of this, right? I thought SET REPROCESS would help, but in some situations it failed on me. Let me give you an example:

I know this is bad design, but just to get to the bottom of problem and understand its underlying mechanisms:

I once had a form with a grid and a free table as the RecordSource for the grid. Among other fields, the table had one logical field which was used as a checkbox in the grid to select certain records; let's call it field_1. When running the form, in the Load/Init event of the form there was a command "REPLACE ALL field_1 WITH .F.", so that no record is selected when the form is first opened. When User1 opened the form, scrolled in the grid etc., and then User2 opened the form while it was still open on User1's PC, VFP would always freeze. Even after setting "SET REPROCESS TO 5 SECONDS" (the REPLACE ALL would take less than that), VFP was still freezing. The same happened after changing the REPLACE ALL to UPDATE mytable SET field_1 = .F. How could this be explained in terms of locking?

Thanks,
Manni

 
REPLACE ALL, or UPDATE without a limiting WHERE clause, are both very intensive file operations. And remember, there is no VFP server component; the file is changed from the client, so it's operated on by loading it through the network, changing blocks of it and writing them back. Even when changing just a single byte per record, as with a logical field, the block-oriented file operations might need to read and write the whole file.

Such things as a selection flag are surely better kept out of the central table. But I guess you know that by now.
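One way to keep such a per-user flag out of the shared DBF is a local READWRITE cursor (names are hypothetical):

Code:
* Selection state lives in a per-user cursor, not in the shared table:
SELECT *, .F. AS selected FROM mytable INTO CURSOR curGrid READWRITE
* REPLACE ALL on the cursor touches only a local temp file,
* so it needs no network locks and cannot freeze other users:
SELECT curGrid
REPLACE ALL selected WITH .F.

The cursor can then serve as the grid's RecordSource instead of the shared table.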

Chriss
 
Thank you Chriss, I now always try to avoid commands like REPLACE ALL and UPDATE without a WHERE clause, both for network performance and for the mentioned locking problems.

Is it possible to use UPDATE with a WHERE clause to update many records while another user edits one record for which the WHERE condition is not .T.? Or will UPDATE try to scan every record, and if it finds a record which is being edited at that moment by another user, cause a problem? In other words: is it safer to always use a SCAN...ENDSCAN loop with an IF clause for replacing many records, or is UPDATE...WHERE just the same in regard to locking/networking?

Thanks,
Manni

 
How writes introduce automatic locks is explained in the VFP Help topic on record and table locking.

I'd highlight this paragraph:
help said:
In addition to record or table locking, you can also choose automatic or manual locking. Many Visual FoxPro commands automatically attempt to lock a record or a table before the command is executed. If the record or table is successfully locked, the command is executed and the lock is released.
The second sentence somewhat invalidates the first one, especially the "choice" you supposedly have. Indeed, what an UPDATE does is what it does, and you have no choice there.

So your question is good: whether single REPLACEs can be less restrictive to shared usage than one UPDATE-SQL.

Well, see the table in that topic. REPLACE differs depending on what exactly you do, but single-record REPLACEs only lock single records, while UPDATE-SQL is listed as doing a table lock. In all simplicity, that means that during an UPDATE-SQL done by one client, no other updates can be done, not even ones that would only need a single record lock.

But avoiding UPDATE does not make things less restrictive. A single UPDATE can be done faster than many REPLACEs, as, just like with SQL SELECT, Rushmore optimization improves finding the records to update, and then it's easy to visit them all.

Also, the table or record lock aspect that compares to FLOCK() and RLOCK() isn't everything. There is the concept in Windows file systems that there can only be one file handle with write access, no matter how small or big your write operation is. In that sense, no parallelism is possible even using only REPLACEs: only one process of one client can have a write-access handle on a file. That compares to VFP's FOPEN(cFileName, nAttribute) with any nAttribute value other than 0 or 10, which can be shared because those modes only read.

So VFP's shared mode is always interrupted to become temporarily exclusive when you write, no matter what command or function you use for it.

And on top of that, once you use buffering, it won't affect you whether VFP's runtime could get write access to the file, because you act on the buffer only. The only moment you act on the DBF (or a remote backend) is with TABLEUPDATE(). Or, when using record buffering, every time you switch records you cause an implicit TABLEUPDATE() without actively calling the function.

With buffering, you not only control when you do the update operation that needs write access. You can still have bad luck, too, but as I said before:
myself said:
have a defined point of failure and can react to it, while any problem you have without buffering happen anytime anywhere and only trigger the general ON ERROR handler

That's the big advantage of buffering. It's not a VIP ticket to write access, but by doing this when you decide, and by getting a response to the attempt rather than a raised error, you have it easier in the end. Also because you reduce the network traffic that's necessary for all the single updates, each of which first needs to get the write handle, then the table or record locks or both, then write, then unlock, then close the write handle. All this is a lot of back-and-forth network communication aside from just the data transport.

Also, forget about locks; in my experience, what weighs more is the need to get write access in the first place. A TABLEUPDATE() of all buffered rows only does this first step once and keeps the write handle for all its operations, and it does the INSERTs, UPDATEs and DELETEs, all of the changes, instead of you doing all this in single commands.

If you really want to be less invasive to all other user sessions by doing a SCAN loop with REPLACEs on all records rather than an UPDATE, you should really experiment. I'd predict this is only less inhibiting when you really slow down your process and always give other clients a chance to process events. It's surely not better to get write access, then locks, then write, then unlock and then revert to read access, if you do it in such close succession that the short moments between reverting to read access and getting write access for the next REPLACE don't really give other clients a chance to do any write operations.

Take into account that all these state changes need to propagate, and frequent status changes make misinformation more likely. You get very little net effect from a single REPLACE for all the network operations the runtime needs to request something, get it confirmed in a response, and then realize it can go to the next phase.

Chriss
 
Thank you again, Chriss. I've never seen it from the perspective that buffering also reduces network traffic and thus avoids more conflicts when different users try to write to the same file. That seems like another huge advantage! Without buffering I noticed that when a user types inside a textbox, the record is locked the whole time until the Valid event triggers. With buffering, the record would only be locked the moment the update is committed. This alone would minimize the number of conflicts, in my opinion.

Reading the VFP help again, it kind of surprised me that UPDATE-SQL really locks the whole table regardless of the WHERE clause, while a DELETE-SQL only locks the record.

Another example that worries me about REPLACE vs UPDATE:

Imagine there is an email program written in VFP. Each email is a record in the table emails. The user can also write drafts (emails to send later) and they are stored in the email table as well.
When the user opens his drafts box, all drafts of the current user are loaded into a cursor and displayed in a grid. The user can send all drafts at once. After the drafts are sent, an UPDATE on the emails table is performed:

Code:
FOR i = 1 TO ALEN(arrIds)  && arrIds contains all the IDs of each sent draft
   UPDATE emails SET sent = .T. WHERE emails.id = arrIds(i)
ENDFOR

As UPDATE locks the whole table, when a different user is sending all of his drafts at the same time, it could cause a conflict, because both try to update the same emails table. But this would not be the case if you instead used SEEK to find the email and then REPLACE, because REPLACE only locks the record, and therefore the other user's records are never locked.

So without using buffering, it's dangerous to use UPDATE-SQL in a network, no? Would it need a RETRY in the error handler to solve this conflict?

How would you handle this example with buffering/locking, to make sure both UPDATE commands are executed?

Thanks,
Manni

 
ManniB said:
Without buffering I noticed, when a user writes inside a textbox, then the record is locked the whole time until the valid event triggers.

That's not the case when you don't lock manually either. Automatic locks only happen when writing, so that would be initiated after Valid returns .T.
Remember that the editing of a control value is "buffered" anyway: editing the control only changes Control.Value until you leave the control, Valid allows it, and the value is stored back to the ControlSource. That's what I highlighted in my first long answer.

When you experienced that, you must either have locked manually, used pessimistic buffering, or started a manual transaction. Automatic locking always happens very close to the write, and that's not triggered by modifying the control.

The behavior you describe happens with pessimistic row or table buffering, as the "pessimistic" in that buffering type refers to being pessimistic about conflicts and therefore locking whatever you start editing, so nobody else can edit it at the same time and there will be no conflicts when submitting changes. While it sounds like a good option, it also means excluding others from making any edits for the whole time until that buffered change is committed. And that does not always happen when Valid returns .T., not even in row buffering; you then have to change the record to trigger the save of buffered changes and the unlock of the saved record, or you have to do a TABLEUPDATE.
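For reference, these are the buffering modes you set via CURSORSETPROP (mode 1 is no buffering):

Code:
SET MULTILOCKS ON                && needed for the table buffering modes
=CURSORSETPROP("Buffering", 2)   && pessimistic row buffering: locks on edit
=CURSORSETPROP("Buffering", 3)   && optimistic row buffering: locks on commit
=CURSORSETPROP("Buffering", 4)   && pessimistic table buffering
=CURSORSETPROP("Buffering", 5)   && optimistic table buffering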

There's not much to worry about with update conflicts; optimistic is usually the better choice. Even if TABLEUPDATE returns .F. and you can't store your change, that doesn't mean it's lost or you're deadlocked and can't do anything. It only means there was another change in the same record, which you can now see by getting CURVAL().

Say you discover a person named Smitt that should be Smith and you change that, but somebody else also sees it and changes it, and commits his change first. Then you get an update conflict. It's nothing world-shattering. You load CURVAL() and see the other change is already the same change you wanted to make anyway, so you can simply revert your change with TABLEREVERT. You get out of the dilemma and can continue.

If the already saved change indeed made it worse and turned Smitt into Mitch, you can also decide to do a TABLEUPDATE of your change with lForce = .T. and override the other user's change.

Of course, the user can't program this, so you have to program a little conflict management. I think many people avoid buffering because it means needing to program a bit. It's not that massive a task.
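A minimal sketch of such conflict management (the alias and field name are made up):

Code:
* Hypothetical: react to an optimistic update conflict
IF NOT TABLEUPDATE(1, .F., "names")
   IF CURVAL("lastname", "names") == names.lastname
      * the other user already made the same change; discard ours
      =TABLEREVERT(.F., "names")
   ELSE
      * show both values to the user; to override the other change:
      =TABLEUPDATE(1, .T., "names")   && lForce = .T.
   ENDIF
ENDIF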

Chriss
 
I can tell you an anecdote that resulted from a customer insisting on locks, i.e. pessimistic buffering. It was not rare for a user to hold a lock for a very long time. Sometimes they even went to their lunch break. You know how this happens: you start editing, a coworker comes into your office asking you to join them, and you do, without ending your edit.

And then? As another user needing to edit the locked data you have no power. Not only can you not program something; even code can only see that there is a lock, but not who has it, who has to end their editing. That's not even easy to determine with administrative OS tools. So the customer asked us to implement some metadata construct to log which user edits which record, so this can be looked up.

It's fantastic from the perspective of having more work and earning more, but the simpler solution is to not insist on locking. "Pessimism" causes more problems than it solves, in my experience. If you instead give the user the option to decide for the force option when the unforced TABLEUPDATE reports a conflict record number, then an urgent change can be made, you can know what change the other user wanted, and you can log notifications of what was overridden, which you can offer to anyone to see and check. In conjunction with edit logs you can even inform the user about the override.

You can also decide to use the force option as the default. But that would be the other extreme of bad practice, because you never know which edits you override.

What you do should be considered from the perspective of the functioning of the business logic, though, not only from the technical perspective. In a seat, car, or other reservation system, it is indeed important that you never have two reservations for the same thing. So you will ensure this with a lock that prevents even the chance of an override. And these considerations have to be prioritized over purely technical thinking.

Chriss
 
ManniB said:
So without using buffering, it's dangerous using UPDATE-SQL in a network, no? Would it need a RETRY in the error handler to solve this conflict?
No. If an UPDATE does not get the table lock, it waits and retries. There is no giving up and no error about UPDATEs failing because they don't get their write access or table lock. And remember: write access, table locks, and record locks are all different things.


Chriss
 
And also note that buffering and locking are two separate things, and write access is a third one. That's at the low OS and file system level, while all locks, manual or automatic, are merely a VFP-internal thing.

Buffering and locking are only combined because CURSORSETPROP only has the "Buffering" property that controls both things. You have the choice of record vs. table buffering, which just means the buffer can hold either only one changed row or multiple; in row buffering, a change of row causes an automatic, implicit TABLEUPDATE of the row. The optimistic and pessimistic part controls whether starting to edit causes record locks, or whether there are just automatic locks when you commit changes with TABLEUPDATE.

And just as an aside: the conflicts you get from a TABLEUPDATE have nothing to do with not getting a lock on the data. Technically the update could already work, but VFP only updates if CURVAL() is the OLDVAL() you still know; it thereby makes sure you are really the only one who made a change since you loaded the data. That has nothing to do with a transaction or a lock held during all that time.

VFP can know three states of the same field and record: the value you originally loaded, OLDVAL(); the value you have buffered, your current edit, which is simply workarea.field, or just the field in the current workarea; and CURVAL(), what is written in the DBF right now, which may still be the same as what you loaded when there wasn't yet a change from some other session. It's not a technical problem of getting write access or locks; it's VFP's decision to judge that as a conflict, and it's a service to users who, if they use it, can know what others changed while they were changing. It does not go as far as it could; it cannot give you the buffered changes of other users. Each user has his own buffer; that's not a central instance, not a parallel file or file stream, it's part of the workarea, and workareas belong to each process.
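The three states can be inspected directly (a sketch; the table, field, and value are made up):

Code:
SET MULTILOCKS ON
USE articles SHARED
=CURSORSETPROP("Buffering", 5)    && optimistic table buffering
REPLACE price WITH 9.99           && a buffered edit, not yet written
? OLDVAL("price", "articles")     && the value as originally loaded
? articles.price                  && your buffered edit: 9.99
? CURVAL("price", "articles")     && what's in the DBF on disk right now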

So TABLEUPDATE conflicts don't point out technical or network problems at all. You can also design your system to rarely even get those conflicts. Design your database in a normalized form and you rarely have giant records holding any and all aspects of something, where editing an object always means editing the same record. In normalized data the chance of editing the same record simply goes down, as you have multiple records, sometimes even in 1:n:m hierarchies.

Chriss
 
ManniB said:
As UPDATE locks the whole table, when a different user is sending all of his drafts at the same time, it could cause a conflict, because both try to update the same emails table. But this would not be the case if you instead used SEEK to find the email and then REPLACE, because REPLACE only locks the record, and therefore the other user's records are never locked.

You haven't understood or remembered what I already said about writing to files. VFP's lock mechanisms are unimportant in comparison with the overarching need to have write access to a file. Only one process anywhere in a network can have that at any one time. That also means that no matter whether you UPDATE or REPLACE to make changes, at that moment your process locks out any other process anyway. Both REPLACE and UPDATE never fail on this but retry until they get write access, and it's guaranteed this happens, as that's actually the only component that is handled by a server-side process with the file service role. That process, which hands out file handles, sequentializes and processes requests. So you queue up here with any command that writes.

There are only a few things in VFP that immediately react to not getting access to a file, mainly USE. That causes the only situation where an UPDATE can fail: when it needs to first open the file because it isn't yet open, and the file is open exclusively in another process and/or client. But then the update doesn't fail on the write access; it fails on overall access to the file at all. A REPLACE can't really fail that way, as that's a command you run on a workarea name, which already has a DBF, view, or cursor open. It can of course fail on a cursor you created read-only, for example using an SQL query INTO CURSOR somename NOFILTER. These situations then cause failures on getting the overall access or the write access, not the locks.

There are of course a lot of situations you can try out, especially since you can also lock manually with RLOCK(). I'd have to test whether that lets a REPLACE retry indefinitely, raise an error, or just end with _TALLY = 0, even though there would be one or more records to replace per the FOR clause.
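A sketch of such an experiment, run from two VFP sessions (table and field names are made up; SET REPROCESS controls how long commands retry before giving up):

Code:
* Session 1: hold a manual record lock
USE emails SHARED
GO TOP
? RLOCK()                      && .T. when the lock was acquired

* Session 2: see how REPLACE reacts to the foreign lock
USE emails SHARED
SET REPROCESS TO 5 SECONDS     && retry for 5 seconds, then give up
REPLACE sent WITH .T. FOR id = 1
? _TALLY                       && how many records were actually replaced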

Chriss
 
Last but not least, as a continuation of the last thought...

myself said:
There are of course a lot of situations you can try out
I had different experiences within the networks of different customers; there is no generally better way. I have experienced locks being seen by some but not all clients. Usually the client that made the lock saw it for sure, but others didn't.

And I know the mechanism by which VFP locks records. That's described in the FoxPro wiki, which is down on purpose, but here's a link from the Wayback Machine:

This discusses how the way VFP locks can lead to it shooting itself in the foot. It's a bug by design, so to speak.

But it tells you one thing: making a lock is actually a file operation for which you first need access, so the OS restrictions on handling files are always the first hurdle to take, not the locks. The way a record lock is described there, it's made at a position of the file that isn't actually the bytes of the record you want to lock; otherwise you would lock out anyone, including yourself. So record locking is just a convention of making a lock at some position that all VFP processes comply with.

Another consequence is that shared access is mainly read access, and by the time you actually write, you need exclusive access temporarily. There's no way to write in parallel from several processes, not even in non-overlapping file portions, and not even if the drive is an SSD that could theoretically write to two different file positions at the same time without needing to physically reposition a write head.

All write access is sequentialized, all write requests are queued. To get into that queue you at least need to already have read access to the file; otherwise, the only situation that hinders a write operation of any command like APPEND, REPLACE, DELETE, or the similar SQL statements INSERT/UPDATE/DELETE is when someone has exclusive access to the file. In all situations where you already have access, your requests are queued, i.e. a REPLACE or UPDATE may wait a long time until it's done, but it won't unqueue or fail.

I bet if you take this to extremes you can also hit timeouts or deadlocks. Well, as I said, I had different experiences. They don't just depend on how VFP works; there are also so many network mechanisms in the OS, file system, hardware, and drivers that VFP does not control, that it can behave differently in a different setup. It's good to have a plan and rigorously test it.

For example, one UPDATE vs. lots of REPLACEs, as you keep asking: try it out. Usually it's simpler to get a bowl of soup with a ladle than with a spoon. And then it's not practical to eat the soup with a ladle, which doesn't mean the spoon is more performant, but it is still more appropriate.

Chriss
 