
Is there a proper way of deleting a record if the application is used on a network?


Mandy_crw

Programmer
Jul 23, 2020
578
PH
Hi everyone… my application is accessed through a network, and all client computers may delete or add a record… I don't know if this is a glitch, but every time a record is deleted and then added again with the same idnum, when it is chosen in the dropdown list, it says "idnumber not found" or "record not found"… May I ask how I can address this issue? Thanks…
 
I am wondering if you mean you delete a record on workstation A and you are using a dropdown list on workstation B, and lo and behold, the thing you select on B has vanished?

You need to refresh the dropdown list on B before you make a selection - how often do you delete and re-add these records?

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.

There is no place like G28 X0 Y0 Z0
 
Griff is right about the case where a record bound to the combobox (dropdown list) is deleted. But one thing is sure: reusing an idnum is an error, don't do that. If your idnums are integers, make use of the autoincrement feature of integer columns and insert new records with new idnums, even when they replace old values.
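
As a minimal sketch (assuming VFP 9 and a hypothetical members table), an integer autoinc column hands out a fresh idnum for every new record, so old values are never reused:

```foxpro
* Hypothetical sketch: idnum as an autoincrementing integer key.
* New inserts always get the next value; deleted idnums are never reused.
CREATE TABLE members ( ;
    idnum I AUTOINC NEXTVALUE 1 STEP 1, ;
    name  C(40))

* Don't name idnum in the INSERT; VFP fills it in automatically.
INSERT INTO members (name) VALUES ("Mandy")
```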

If you tell us more about the records and the reasoning for deleting and then reusing them, we might also come up with a better way of doing that. For example, you could also introduce a logical field like lInactive to deactivate a record.

If you have a very volatile table (with many changes happening to it from different clients) that feeds the dropdown list, you could also bind the dropdown list not to the DBF itself but to a cursor from SELECT * FROM dropdownlist INTO CURSOR DropDownValues. Then a) you are not influenced by any actions on the DBF itself in the first place, but therefore b) you have to check, after a choice is made, whether that record still exists in the DBF, and if not, update the dropdown list for a new selection.

To do that you could use the InteractiveChange event of the combobox and check the existence of the selected record there. If it doesn't exist, warn the user, requery the cursor, and let the user pick another record. You should also check in the Valid event whether that record still exists.
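
As a sketch (control, table, and tag names are hypothetical, and the combobox is assumed to be bound to the DropDownValues cursor), the InteractiveChange check could look like this:

```foxpro
* Hypothetical sketch: verify the chosen record still exists in the DBF,
* assuming the dropdownlist table has an index tag "idnum".
PROCEDURE cboIdnum.InteractiveChange
    IF !SEEK(This.Value, "dropdownlist", "idnum")
        MESSAGEBOX("This record was deleted by another user." + CHR(13) + ;
            "The list will be refreshed.", 48, "Record not found")
        SELECT * FROM dropdownlist INTO CURSOR DropDownValues
        This.Requery()
    ENDIF
ENDPROC
```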

Overall, it would be much better not to need these checks at all, so what are these records, and why are they changing so much and causing so many conflicts?

In case of seat reservations in a cinema, for example, you would perhaps use locking, so no two clients could reserve the same seat.
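
If locking is the fitting approach, a minimal sketch (table, tag, and variable names are hypothetical) with a pessimistic record lock could look like this:

```foxpro
* Hypothetical sketch: lock the seat record before reserving it,
* so no two clients can reserve the same seat at once.
* Assumes a seats table with an index tag "seatid".
USE seats SHARED
IF SEEK(lnSeatId, "seats", "seatid")
    IF RLOCK("seats")                        && try to lock this record
        IF EMPTY(seats.reservedby)
            REPLACE seats.reservedby WITH lcUser IN seats
        ELSE
            MESSAGEBOX("Seat already taken.", 48)
        ENDIF
        UNLOCK IN seats
    ELSE
        MESSAGEBOX("Another client is working on this seat, try again.", 48)
    ENDIF
ENDIF
```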



Chriss
 
It seems to me this is the inverse of the problem in your thread thread184-1824310, where you don't see new records.

In general, both INSERT and DELETE simply work as they should and shouldn't result in only some clients seeing the change. A deletion of a record is just an update of the DBF, just like a new record is. You might simply have network problems. I can only advise you to examine exactly what's going on. In my experience there is no one solution that fits all network situations; networks do behave differently, and then there is the dreaded oplocks problem, which I think is still not fully resolved.

The general advice is to use buffering. That in itself must make you cautious about changes that could have happened from other clients, as you have to stop thinking of having a live view of a database anyway. Buffered changes are local only and only get committed when the client saves. So you always have to be prepared for a database change that conflicts with your own.

That speaks against buffering if you look at it from that perspective only, but on the other side, your local data situation is stable and fully under your control, so you should rarely have conflicts anyway.

So when using buffering, no matter whether you think and talk about INSERT/APPEND, UPDATE/REPLACE, or DELETE, the only thing that finally makes changes visible to other clients is the TABLEUPDATE() function, which means this function is your general "save changes" function, no matter what kind of change.
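
As a minimal sketch (assuming a hypothetical members table with an idnum column), optimistic table buffering keeps all kinds of changes local until a single TABLEUPDATE() commits them:

```foxpro
* Hypothetical sketch: optimistic table buffering with TABLEUPDATE()
* as the one "save changes" call for inserts, updates, and deletes.
SET MULTILOCKS ON                          && required for table buffering
USE members SHARED
CURSORSETPROP("Buffering", 5, "members")   && 5 = optimistic table buffering

* Any mix of changes stays local to this client for now:
INSERT INTO members (name) VALUES ("New member")
UPDATE members SET name = "Renamed" WHERE idnum = 42
DELETE FROM members WHERE idnum = 43

* One commit makes all of it visible to other clients:
IF !TABLEUPDATE(1, .T., "members")         && 1 = all rows, .T. = force
    * Conflicts with other clients' changes: revert and let the user retry.
    = TABLEREVERT(.T., "members")
ENDIF
```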

Chriss
 
One very general point: using DBFs, you think you get a live view of all data, but the network still is the network, so access to data in a file that's not local can still be tricky and lagging. Some mechanisms like opportunistic locking, when they fail (and they still do), let you think you see data that's actually not in the file but only in a local cache, which is outdated compared to what's really in the file.

You have one main mechanism in VFP to access what is in a networked DBF: SQL queries. By definition and by non-changeable, non-avoidable default, the SQL engine of VFP, in all versions, opens the DBF file for a query, and that raises the chance of reading from the actual DBF file and not just from some cache that's supposed to represent it. There are still caching mechanisms of Windows and of VFP that could mean you don't get at the actual file data, but it's by far the best way you have. That's another reason to use SQL.
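
As a sketch (table and cursor names are hypothetical), refreshing the dropdown data through the SQL engine instead of relying on a directly opened workarea staying current looks like this:

```foxpro
* Hypothetical sketch: requery through the SQL engine, which (re)opens
* the DBF and raises the chance of reading the actual file contents.
SELECT idnum, name ;
    FROM dropdownlist ;
    ORDER BY name ;
    INTO CURSOR DropDownValues
```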

You could try to mimic the SQL behavior by closing and reopening a DBF file yourself; the SQL engine will also just reuse a DBF already open in an unused workarea, so that will have the same effect. Anyway, this means that using a DBF and binding controls to it in a networked application always leaves you without full control over what you really see in your workareas, as network caching mechanisms can fail and are not fully in your hands.

Theoretically, using a DBF in a workarea should give you an automatically updating view of the data in the DBF, but in the worst case it doesn't. Not only because you don't see other clients' locally buffered changes (that's by design and intended), but even after the commit by TABLEUPDATE() you might not see those changes, as you reported yourself in your older thread.

Chriss
 
Ok Chriss and Griff... I will follow that and not reuse idnums... I think there are fewer mistakes to commit if I don't reuse the idnumber... Thanks for the learning and enlightenment... God bless
 
I'd summarize this as follows, in terms of best practices:

1. Don't reuse idnums. There are many more reasons for that than just networked applications.
2. In a networked application, make use of buffering (usually optimistic table buffering) with TABLEUPDATE() at any point you want other clients to see the local changes, too. And that applies to any changes: inserts or appends just as much as updates, replaces, and deletes. You don't need to ask about each operation separately.
3. As a consequence of 2), also keep in mind that at any point data could need refreshing, because other clients commit data and refreshing views or queried data in cursors is not automatic. That is a disadvantage compared to automatically having the latest data in a DBF you open directly, but as you can see, using a DBF to always read its current state may not work out as it theoretically should, because of Windows caching mechanisms.

One technical detail about this: the ideal way of caching for a database like VFP with its DBF files is called write-through caching. That means whenever there is a change to be written to a file, it goes straight to the file and the write operation does not wait, so other clients at least have the chance to read the new data. The opposite means writes are cached until there is a read operation, and that's where Windows sometimes even fails to get it right. Even in the good case of write-through caching, the newest data reliably being in the DBF file still does not mean it is read by clients; their cached copy of the old DBF state has to be invalidated so they read the new state. And that doesn't necessarily happen reliably.

That's at its core the problem of networked applications.

With table buffering, it seems you even worsen the problem, as you keep the buffered changes local to the client. But just think of the difference between changing an unbuffered DBF and actively calling TABLEUPDATE() to cause the write of changes. If you change anything in an unbuffered DBF, you just rely on Windows, the network, and the file system to write the change. Windows is quite eager not to do things immediately, in case more changes accumulate and can be done in one go instead. With TABLEUPDATE(), VFP puts more force into the need to write out changes. You still also rely on the file system and the actual hard drive controller to write changes, but in my experience, using TABLEUPDATE() is more reliable in forcing changes to actually happen.

So even if you want changes to be written as soon as they happen, you should use buffering and in that case follow every change you make with a TABLEUPDATE(). It's like a FLUSH FORCE after each DELETE, APPEND, or REPLACE. What's more, TABLEUPDATE() returns .F. when there were conflicts. That's not only about technical problems, but let me stop here; those are details for another time. Your worst enemy is still Windows' opportunistic locking, which is not under your control, but actively committing changes with TABLEUPDATE() works out better than anything else. And that requires buffering in the first place, even if just to immediately TABLEUPDATE().

And now that you asked what's the best way to insert, update, or delete data: it has the same answer for all three operations. They all summarize to changes to tables, and those are done by TABLEUPDATE() of buffered data. So an INSERT/APPEND/REPLACE/DELETE into a buffer followed by TABLEUPDATE() pushes the file change with more force than an INSERT/APPEND/REPLACE on an unbuffered DBF.
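
As a minimal sketch (the alias name is hypothetical), checking the result of TABLEUPDATE() and inspecting a failed commit with AERROR() looks like this:

```foxpro
* Hypothetical sketch: TABLEUPDATE() as the single save path,
* with AERROR() to find out why a commit failed.
IF !TABLEUPDATE(1, .F., "members")   && .F. = don't overwrite others' changes
    LOCAL laErr[1]
    AERROR(laErr)                    && laErr[2] holds the error message,
                                     && e.g. an update conflict (error 1585)
    MESSAGEBOX("Save failed: " + laErr[2], 48, "Conflict")
    = TABLEREVERT(.T., "members")    && discard local changes and retry
ENDIF
```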

Chriss
 
There's much to learn; it is also condensed in this chapter of the VFP help. You also find this online reference in your local help file.

That was all written without the knowledge and background of the opportunistic locking problem, but it has all the fundamentals of how shared data access works, and it also points out buffering, not only using DBFs shared.

Chriss
 
