
Mysterious VFP event


dmusicant

Programmer
Mar 29, 2005
I created a new field in one of my FoxPro tables, a numeric (6,2), browsed the table, and hand-entered about 45 values in a browse window. The browse runs inside a complex PRG. When I went to close the table, VFP hung and I had to kill it. When I went back to look at the data, it was all missing: the new field was there, but the values I had entered (some 45 of them) were not. In my experience, when you change data in FoxPro and move to a different field while in a browse, the table is updated automatically. What could have happened?

Yes, I can re-enter the data (it required looking up the values on the Internet), so I have a cure for the problem, but I'm upset that I have to do so. What could be the matter?

BTW, VFP 9, at least with this app, has been acting up lately and is extremely slow (taking over a minute, if it gets there at all) to access the tables on my NAS. Is there something I can check to fix that? It's just been the last few days.
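One thing worth trying before closing a table after a browse session is forcing VFP to commit its buffers to disk explicitly, rather than relying on the implicit write when the table closes. This is only a hedged sketch; the table name is illustrative, and it assumes the browse is run from a PRG as described above:

```foxpro
* Sketch: force pending writes to disk before closing a table.
* "mytable" is an illustrative name, not one from this thread.
USE mytable SHARED        && open the table
BROWSE                    && hand-enter values in the browse window
* After the browse closes, make sure buffered changes reach the disk:
IF CURSORGETPROP("Buffering") > 1
    TABLEUPDATE(.T.)      && commit buffered rows (only if buffering is on)
ENDIF
FLUSH                     && ask VFP to write its internal buffers to disk
USE                       && close the table cleanly
```

If VFP hangs before `FLUSH`/`USE` can run, edits held in OS or NAS write caches can still be lost, which is consistent with the symptom described.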
 
I don't know how to stabilize the network. I have no idea why it becomes unstable from time to time.

You appear to be saying I can run Foxpro directly on the network data. What's unstable about running it on local data and just copying it (when changed) to the network, where the other machines can download to local versions before using?
 
I'm saying that writes to an unstable network will still be unreliable whether it's via Foxpro or through a file copy. It's still writing to an unstable network resource. All you'll have added is extra overhead.
 
Dan, I think it still might be a workable way of dealing with this. There are several advantages, I believe.

VFP will run quickly on local data. I access these tables a lot, much of the time I don't even change the data. In instances like that, the network isn't involved at all if I'm running on local data. But if I'm running on data on the NAS, I have to get the handle on the network data, and I have been experiencing a lot of latency. This is a biggie.

If I institute my idea, my most used local machine will have almost entirely up to date data. It's a laptop and I can make a quick check to see what might not be up to date and bring it up to date before taking the machine off site.

I haven't noticed the network being unstable except in this instance of my changes not ultimately being recorded. It seems like an anomaly. However, the latency problems are very common and variable, ranging from a few seconds to maybe as much as 30 seconds. Even 2 seconds of latency is an annoyance when I can engineer a solution that eliminates perceived latency. Copying a table that's been updated on the local machine to the network should, I believe, happen outside of my personal observation, in the "background" as it were.

There will be some extra overhead, but my network isn't besieged by multiple tasks, so it should be able to handle it easily.

Like I said, I don't have any idea of what I can do to stabilize the network. There may be something, but I don't know what it is.
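The "work locally, mirror to the NAS" idea sketched in the posts above could look something like the following in VFP. This is only a sketch under assumptions: the paths, procedure name, and the set of companion files are illustrative, and it assumes the table is closed before the copy (COPY FILE requires that):

```foxpro
* Sketch: push one table (plus its companion files) from the local
* disk to the NAS after local edits. Paths are illustrative.
PROCEDURE PushTable(tcName)
    LOCAL lcLocal, lcNas, laExt[3], i
    lcLocal = "C:\vfpdata\"
    lcNas   = "S:\vfpdata\"     && mapped drive to the NAS share
    laExt[1] = ".dbf"
    laExt[2] = ".fpt"           && memo file, if the table has memo fields
    laExt[3] = ".cdx"           && structural index, if present
    FOR i = 1 TO 3
        IF FILE(lcLocal + tcName + laExt[i])
            * Table must be closed on this machine before copying
            COPY FILE (lcLocal + tcName + laExt[i]) ;
                TO (lcNas + tcName + laExt[i])
        ENDIF
    ENDFOR
ENDPROC
```

Called right after a table is closed locally, this keeps the NAS copy current without the app ever opening tables across the network.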
 
There is one other advantage of working on local data and putting it on the NAS merely to forward it elsewhere: you have a local backup of the last state you worked on. Or you have the backup in the cloud, depending on what you see as the live data and what as the backup.

If you work on it with multiple users, each working on a separate local copy of the data, the danger is that concurrent use of the application creates divergent tables you won't be able to merge just by putting them into the cloud via the NAS. Two files arriving there are not merged; the last file stored simply overwrites the previous one. This means "last change wins", depending on which user submits his changes last. Whether that editing session was based on today's data or on last month's data wouldn't matter, and that's the danger!

What you'd be doing is essentially database replication, and that's not trivial in its own right. You couldn't do the fine-grained replication a SQL Server can do by forwarding each transaction committed in the master database to slaves at other locations. You'd need to treat your database as if it were a document only one user at a time can write to, while the others wait and may only read the last state saved to the cloud.
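The "one writer at a time" rule could be approximated with a crude semaphore file on the NAS, using VFP's low-level file functions. This is a hedged sketch only: the lock-file path is an assumption, and the FILE()-then-FCREATE sequence is not atomic, so it is advisory rather than bulletproof:

```foxpro
* Sketch: crude single-writer check-out via a semaphore file.
* The path "S:\vfpdata\inuse.lck" is an illustrative assumption.
FUNCTION TryCheckOut
    LOCAL lnHandle
    IF FILE("S:\vfpdata\inuse.lck")
        RETURN .F.              && another machine has the data checked out
    ENDIF
    lnHandle = FCREATE("S:\vfpdata\inuse.lck")
    IF lnHandle < 0
        RETURN .F.              && could not create the lock file
    ENDIF
    = FWRITE(lnHandle, SYS(0))  && record which machine holds the lock
    = FCLOSE(lnHandle)
    RETURN .T.
ENDFUNC

* When finished: ERASE S:\vfpdata\inuse.lck
```

For a single user across several machines, as described in this thread, even this advisory lock would be enough to catch "I left the other laptop checked out."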

Bye, Olaf.
 
Hi dmusicant,
I have the following setup

at home: a shared folder on the NAS called VFPDATA plus its subfolders. The VFPDATA folder is addressed as a network drive S:\. Three PCs/laptops work on the data concurrently over a gigabit cabled network.

from outside: my coworkers have READ access (that's all they need) to the data on the NAS via secure WebDAV. (Please have a look at WebDrive or similar, which you have to install on your remote machines.)

The apps (EXEs) are of course installed on each local/remote machine.

This setup has worked fine for us for years now - knock on wood!

hth

mk

 
I'm having spotty performance. Sometimes I get pretty quick access to the data, but occasionally I have to wait 30 seconds or more. It's really not acceptable. Since I'm the only one working on the data, I don't have to worry much about concurrency. The only issue I can see is if I have a table open and haven't saved changes (e.g. I'm in a memo field and forget to close it before putting a machine into suspend). That's something I don't do much anymore, and when I do it's usually the case that I haven't made any change to the data, just needed to see it.

Working locally should be extremely fast (my files are small). It will take some doing to set up local access to the data, cloud storage (forward any changed files to the networked data), checking from local machines if networked files are more current than the local files, and downloading if necessary. It's a paradigm shift for my applications, but nothing difficult.

Yes, a dividend here is that the local machines will have current data, so it acts as a supplementary backup solution. I will probably set it up so a local machine updates a table only when it goes to use it. However, I could instead set things up so they update everything that's changed as soon as the apps open; that's not hard to do. Or I could write a function that does it on command (download everything new). I'm tired of the performance issues, and I think this will fix them, and it should prevent any recurrence of the NAS caching changes and never writing them to a table (the only explanation that makes sense to me for the problem I describe in the OP).
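The "update a table only when it goes to use it" step could be done by comparing file timestamps. A hedged sketch, with illustrative paths and procedure name; it assumes the table is closed on both sides when the copy runs:

```foxpro
* Sketch: refresh the local copy of a table from the NAS only when
* the NAS copy is newer. FDATE(cFile, 1) returns the last-modified
* datetime. Paths are illustrative assumptions.
PROCEDURE RefreshIfNewer(tcName)
    LOCAL lcLocal, lcNas
    lcLocal = "C:\vfpdata\" + tcName + ".dbf"
    lcNas   = "S:\vfpdata\" + tcName + ".dbf"
    IF !FILE(lcLocal) OR FDATE(lcNas, 1) > FDATE(lcLocal, 1)
        COPY FILE (lcNas) TO (lcLocal)      && table must be closed
        IF FILE(FORCEEXT(lcNas, "fpt"))     && bring the memo file too
            COPY FILE (FORCEEXT(lcNas, "fpt")) ;
                TO (FORCEEXT(lcLocal, "fpt"))
        ENDIF
    ENDIF
ENDPROC
```

Calling this just before a table is opened keeps the network out of the normal read path entirely; the NAS is touched only for the timestamp check and the occasional copy.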
 

Muse said:
14 Oct 14 16:43
I'm having spotty performance. Sometimes I get pretty quick access to the data, but occasionally I have to wait 30 seconds or more.
I think I've come up with something on this. The app that's exhibiting this comes back to its screen (the only screen), which was generated in FPW 2.6a. I was in the habit of putting machines to sleep while VFP (running this app) was at that screen. Awakening a different machine and running the app appears to bring on the slow performance, at least sometimes.

I think the issue is that a table is open at that screen on one of the local machines that has been put to sleep. There are a number of tables that could be the open one: it's a table of metadata, and which one depends on what category I'm dealing with. I figure that if I close that table (and only open it when initiating an action at the screen), the performance issues will disappear, and that having to open the table when initiating an action shouldn't have much, if any, performance impact.

If that fails to resolve the performance issues (I doubt it will), I can either close the app every time before putting a machine to sleep (what I've been doing lately) or, as discussed above in this thread, resort to running everything locally and updating _mirrored_ server-side (i.e. current data repository) copies of the data on the NAS to inform the various local machines on the network of the latest data.
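The open-on-demand change described above is a small restructuring. A hedged sketch, with an illustrative procedure name and alias; it assumes the metadata table name is known when the action starts:

```foxpro
* Sketch: open the metadata table only for the duration of an action,
* so no handle to the NAS is left open while a machine sleeps.
* Names are illustrative, not from the actual app.
PROCEDURE DoCategoryAction(tcMetaTable)
    USE (tcMetaTable) IN 0 ALIAS meta SHARED
    SELECT meta
    * ... perform the action for the chosen category ...
    USE IN meta       && close immediately; nothing stays open at the screen
ENDPROC
```

With no table open while the app sits at its screen, a sleeping machine can't hold a stale lock or oplock on the NAS file, which is a plausible cause of the 30-second stalls seen from the other machines.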
 