
Cloud/Online and/or LAN file storage for Foxpro data


dmusicant

Programmer
Mar 29, 2005
I have a couple of VFP apps that access, store and change data on a regular basis, and I do this from 3 different Windows machines on my LAN. I currently have the data centrally stored on a server machine. I've been having some issues, including suspect HD's, failing HD's, primitive and irregular backup methodologies, sometimes slow or otherwise unreliable data access, and lately a couple of inexplicable "Memo file is missing or invalid" errors that had me restoring backups and scrambling to find or somehow replicate missing records.

I'm considering two options. The first is to improve my local setup: either buy a NAS of some kind (determining just what to buy is a daunting process for me, something I can probably manage, but it's going to take a bunch of research, which I've barely started), or buy or build a server machine superior to what I've been doing. So far that has been a 2TB USB HD attached to a spare laptop, and then the same HD attached to my recently acquired Asus RT-N66U router, which may be the source of the file corruption problem, so I've moved the HD back to the laptop. The other idea is to keep my data (probably with occasional backups on my end) in some kind of online storage, such as Dropbox or Google Cloud or some other. Hopefully I could access, add to and modify the online FoxPro tables from any of my local machines in real time without difficulties(?).

The amount of data I'm talking about is:

App 1: ~200MB
App 2: ~750MB

These include FPT's and TBK's.

This is the data that is most precious to me!

On occasion I take one of my laptops with me and want to access this data. My solution for this so far is to copy fresh data to the laptop before leaving the house and have the apps use that (I often take it to places where I don't have wifi anyway, but sometimes to places where I do have wifi).

I have a Dropbox account, but so far I've only used it to give access by groups of people to some media. I've read some reviews of a few NAS's at Amazon, mostly Synology. I figure mirrored RAID would be a good idea if I'm going to do it all locally, as well as off site storage on portable HD's.

What are people's experience with these issues? Can I get recommendations, caveats, etc.?
 
1. TBK files are generated when you ALTER a table; they are copies of the FPT files as of that moment, not live data. You can throw them away once you edit even one memo of the DBF, because their data will then no longer match at least that memo field, if not more. They are only useful if the altering process fails and you need to reconstruct the original table from before the ALTER. So if you use a TBK as a backup later, after some changes, you plant the seed of broken, missing or invalid FPTs; the TBK really exists only for the case of a failed ALTER. Of course you can always restore BOTH the .BAK and .TBK files, but that is nothing more than a restore point from the moment right before the ALTER.
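A cleanup along those lines might be sketched as follows; the data folder and the assumption that each .tbk has a matching .bak are examples, not anything from the thread:

```foxpro
* Sketch: after a successful ALTER TABLE and further edits, the leftover
* .bak/.tbk pair is stale and can be deleted. "data\" is a made-up path.
LOCAL laOld[1], lnCount, lnI, lcStem
lnCount = ADIR(laOld, "data\*.tbk")
FOR lnI = 1 TO lnCount
    lcStem = JUSTSTEM(laOld[lnI, 1])
    DELETE FILE ("data\" + lcStem + ".tbk")
    IF FILE("data\" + lcStem + ".bak")
        DELETE FILE ("data\" + lcStem + ".bak")
    ENDIF
ENDFOR
```

Only do this once you are sure the altered table is good, since the pair is the restore point Olaf describes.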

2. No, cloud space will not let you USE or query a table the way you can from a network drive, but you can of course use it for backup purposes.

You can of course USE a DBF in your Dropbox folder, or Google Drive or OneDrive or whatever, but only because it is indeed a local folder with local files on your HDD; the cloud space just holds a shadow copy of your local files. Each record changed in a DBF/FPT/CDX will put the files out of sync with the online cloud space and therefore trigger an upload, and not just of that one change but of the whole files, 100% of their bytes. Dropbox and the other services have no idea what inside a file needs updating, as they know nothing of its inner structure. So this causes a lot of traffic to the cloud space, and you won't have a good backup until you finish working with the tables locally and let Dropbox finish syncing. The best usage therefore is to pause Dropbox while you work on the data and sync with the cloud afterwards.

I would recommend moving to MySQL if you want to keep this data online and benefit from a hoster's data center and backup plans rather than doing it all locally. Just a small warning: my ex-boss used one of the cheapest hosts and didn't learn from two incidents of a hacked site and lost data (no backup existed for a whole server, affecting other customers besides us), so invest more than just a few USD a month.

Bye, Olaf.
 
If I move the data to MySQL, could I use VFP as a front end and run all my code? There's a lot of rather complex stuff going on, including interactive reports in App 1 that I generated with FPW (close to FPD) style PRG's. Much of the code in App 1 is totally independent even of FPW screens. Both apps run in FPW 2.6a; they have been made to work in Visual FoxPro with maybe some minor adjustments. So I'm not keen on discarding all that code.
 
A remote database, no matter whether MySQL or SQL Server, requires a rewrite of the data access, obviously. But you get cursors as query results, read-write cursors, and they can be used like DBFs from that point on; you just have to finalize changes with TABLEUPDATE(). Simply closing these cursors, as you can do with DBFs, won't update the remote database. MySQL of course has the advantage that it's available quite cheaply and is also remotely accessible at some hosters (watch out: some hosters only allow local access, e.g. from PHP scripts on the hosting space).
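A minimal sketch of that pattern using SQL pass-through; the DSN "mydata", the credentials and the customers table are all invented for illustration:

```foxpro
* Query a remote table into a local read-write cursor, edit it,
* and push the change back with TABLEUPDATE().
LOCAL lnHandle, laErr[1]
lnHandle = SQLCONNECT("mydata", "user", "password")   && hypothetical ODBC DSN
IF lnHandle > 0
    SQLEXEC(lnHandle, "SELECT * FROM customers WHERE state = 'CA'", "crsCust")
    * Make the result cursor updatable against the source table
    CURSORSETPROP("Tables", "customers", "crsCust")
    CURSORSETPROP("KeyFieldList", "cust_id", "crsCust")
    CURSORSETPROP("UpdatableFieldList", "name, city", "crsCust")
    CURSORSETPROP("UpdateNameList", ;
        "cust_id customers.cust_id, name customers.name, city customers.city", "crsCust")
    CURSORSETPROP("Buffering", 5, "crsCust")   && table buffering
    CURSORSETPROP("SendUpdates", .T., "crsCust")
    REPLACE city WITH "Fresno" IN crsCust
    * Just closing the cursor would discard this; only TABLEUPDATE() writes back
    IF !TABLEUPDATE(.T., .F., "crsCust")
        AERROR(laErr)   && inspect what the server rejected
    ENDIF
    SQLDISCONNECT(lnHandle)
ENDIF
```

The point of the sketch is the last step: unlike a DBF, nothing reaches the server until TABLEUPDATE() runs.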

It's possible to reuse much code, but it's easier if you already have a data access layer and query data via SQL queries or views instead of binding controls directly to tables. Are you using an application framework like Visual Extend, Promatrix, Codebook or Mere Mortals? They all have business object classes, which centralize all code operating on a table, so via subclassing you could switch access to MySQL. It's even easier if they already offer an implementation for SQL Server.

Migration of the data is not very hard, but there are some things which are VFP-exclusive, e.g. empty dates. MySQL at least knows the zero date 0000-00-00 00:00:00, which compares somewhat to that. You may hit some other quirks, but nothing mind-bending or unsolvable.

There is quite some work in it, both per form and per table; how much depends on your current implementation. A simple way to reuse more code at the start is a CursorAdapter reading the full table into a cursor with the alias name the DBF had previously. That will act almost as if you had the DBF, but in contrast to USEing a DBF it copies all the data from that table over your internet bandwidth. So the next step surely is filtering down to only the data you really need.
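That starting point could look roughly like this; the DSN, table, filter and alias are hypothetical, chosen so existing code that expects the old alias keeps running:

```foxpro
* Sketch: a CursorAdapter filling a cursor under the alias the
* app previously got from USE students.dbf.
LOCAL loCA AS CursorAdapter
loCA = CREATEOBJECT("CursorAdapter")
loCA.DataSourceType = "ODBC"
loCA.DataSource = SQLCONNECT("mydata", "user", "password")  && made-up DSN
loCA.Alias = "students"                     && same alias the DBF had
loCA.SelectCmd = "SELECT * FROM students WHERE class = 'A'" && filter early
IF loCA.CursorFill()
    * Code written against the alias "students" works as before
    SELECT students
    BROWSE
ENDIF
```

The WHERE clause is the part worth investing in, since it is what keeps the download small.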

So the faster start surely is using cloud space only for backups. How large are your files? One advantage of database servers is not having the 2GB limit.

And there is a good reference book on VFP with MySQL from Hentzenwerke.

Bye, Olaf.
 
I'm not using a framework. I employ the USE command in general on DBF's. There are a few SELECT commands in there in places, not a lot. App 2 sets a filter on a memo field based on 1, 2 or 3 variables, then browses the table or does a MODIFY MEMO if only one record is returned.
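That filter-then-browse pattern might be sketched like this; the table, memo field and search variables are invented, not taken from the actual app:

```foxpro
* Sketch: filter a table on a memo field by up to three terms,
* then browse, or MODIFY MEMO if exactly one record matches.
LOCAL lcTerm1, lcTerm2, lnHits
lcTerm1 = "invoice"
lcTerm2 = "overdue"
USE notes.dbf ALIAS notes          && hypothetical table
SET FILTER TO ATC(m.lcTerm1, notetext) > 0 AND ATC(m.lcTerm2, notetext) > 0
COUNT TO lnHits                    && respects the active filter
IF lnHits = 1
    LOCATE                         && position on the single match
    MODIFY MEMO notetext
ELSE
    BROWSE
ENDIF
```

ATC() makes the match case-insensitive; with a filter on a memo field every record has to be scanned, which is part of why this pattern is bandwidth-hungry over a network.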

I don't know why I'd want to make the adjustment, or rather rewrite, to a MySQL or SQL Server back end. It would be nice to get the experience, but I don't understand why I'd want to do it. I can back up to a cloud without converting.

So, it's impractical to try to work with Foxpro tables via the internet, I take it. There's just not enough bandwidth? Why would MySQL be workable and Foxpro not?

As I say, my data is currently about:

App 1: ~200MB Total files including DBF's, CDX's, PRG's, etc.
App 2: ~750MB Total DBF's, FPT's, unerased TBK's, CDX's, etc.

App 2's largest table's FPT is about 22MB. App 1's largest file is a 16MB CDX file for a 12MB DBF. App 2 is simpler in code but has a great deal more data and relies on filtering (usually a memo field) to winnow out the records to inspect.

I may just stay with Foxpro tables, work locally, mirror drives in a NAS (or server machine), and back up to USB HD's, some to be stored off site periodically. That would be the easier way to proceed. Backup in the cloud would be cool, especially for access remotely, and as an additional means of securing my data.

Maybe I'll get that Hentzenwerke book, I have a LOT of their books.

Thanks for the help and suggestions, etc.!!!
 
So, it's impractical to try to work with Foxpro tables via the internet, I take it. There's just not enough bandwidth? Why would MySQL be workable and Foxpro not?

You're conflating a couple of different things.

The single most important thing to Foxpro in using a DBF is a single, consistent, persistent connection (or file handle) into the dbf. It will use a file handle for each file it opens. In order for VFP to maintain its state (i.e. current alias, current record), the connection MUST NOT BE INTERRUPTED. (This is why a flaky network can wreak havoc on VFP files.)

That's not the internet. The entire internet is based on a stateless pitch/catch. You toss a request at a server and it returns a result. The next time you toss a request, that server doesn't know you from Adam. You could be an entirely new visitor. There is no state.

A database server (MySQL, SQL Server, etc.) doesn't require state (and in fact doesn't support it). There is no SKIP in the SQL language because that would require the database to keep track of your current state which it does not do.

A database server DOES require a connection, which does not need to be persistent. IOW, you work with it without a persistent state.
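As an illustration of the difference: what SKIP does with an open DBF has to be expressed statelessly against a database server, by re-querying with the position carried in the request itself (table and key names invented):

```foxpro
* Stateful VFP: the work area remembers the current record
USE customers
GO 5
SKIP                && moves to record 6 because VFP tracks position

* Stateless SQL: no "current record" exists on the server, so the
* request itself must say where to continue, e.g. after a known key
SELECT TOP 1 * FROM customers ;
    WHERE cust_id > 1005 ;
    ORDER BY cust_id ;
    INTO CURSOR crsNext
```

Every such query stands alone, which is exactly why an interrupted connection is harmless to a database server but fatal to an open DBF.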
 
Dan has already explained the technical reasons. The Internet is not a file system at all; you can't USE a table located at some URI. VFP DBF access depends on a file system with NTFS, locking mechanisms and more, none of which applies to URIs. Even if the server is a Windows server, HTTP is not the protocol used for file access.

The reason for MySQL or MSSQL is, as you asked, remote storage of your data, with the additional benefit of being able to work with the data from everywhere, not only storing copies/backups there. If you merely want to back up VFP data, the question is why you ask about "Cloud/Online and/or LAN file storage for Foxpro data" specifically. From a backup perspective, file types don't play any role; why would they? You can store anything on cloud space. Since you specifically ask about DBFs, my reasoning is that you want to actively use them there, too.

Bye, Olaf.
 
Hi,

I have a Synology NAS (2*2TB disks, RAID 1) with all my VFP data. My coworkers and I access it at the office via LAN and from outside via WebDAV over https (please have a look at WebDrive.com). Up to now things have worked well.

hth

MK
 
If going in that direction, you can actually add an FTP server via "add network drive" in Windows, since Vista I think, for sure since Win7. You just take the option to specify an FTP URI and user/password. That FTP space will then look like a drive; you can use it like any drive letter or share in Windows Explorer, and a double-click on a Word doc will open it, but it actually downloads the file and works on the download, and with a DBF it wouldn't also download the CDX and/or FPT. I'm not sure how WebDAV clients work, as I haven't used them, but from what I read they also only use GET/POST HTTP requests; the only advantage over FTP is that WebDAV only needs port 80.

Bye, Olaf.
 
I learned about port 80 from the German version of the Wikipedia article, which translates as:

Due to the enormous spread of the World Wide Web, port 80, used by HTTP, is one of the ports that firewalls usually do not block. While other transfer methods such as the File Transfer Protocol (FTP) or SSH (in conjunction with scp or SFTP) often require additional firewall ports to be opened, this is not necessary with WebDAV, because it builds on HTTP and therefore only needs port 80.

Just as you can change the port of an IIS web app, you can obviously use other ports too, e.g. 5005. Anyway, WebDAV is not SMB. It may still be fine for easy file access, but I doubt you can actively work on the files directly with VFP, just as with directories you access via FTP.

Bye, Olaf.
 
Before I'd go and buy a Synology NAS: do you have such a thing at hand, with a drive mapped via WebDAV? Say as drive Z?

Could you then please USE Z:\some.dbf in VFP and see what happens? How about an RLOCK() on a record, and then trying to write to the locked record from a second VFP instance? Does that work? That's what I would demand from this kind of connection; otherwise I can merely access the files to copy them, work on them and then store them back.

Besides that, a Synology NAS is of course something you could buy as the alternative to cloud space, since locally a NAS can simply be mounted as a network drive, with all the usual ways of working with a network share.

Bye, Olaf.
 
Hi Olaf,

Setup is as follows: Laptop/PC's (VFP App + Runtime Files) -> WebDrive (connects to WebDAV server via https + maps drive) - > Synology NAS (WebDAV Server with my VFPData)

Regarding your questions:
1) Yes.
2) I have no problem opening the VFP app as many times as I wish. WebDrive retrieves data from the NAS once and caches it locally; modified data is displayed instantly in the other instances of the VFP app. But you have to make sure the cached data is uploaded in order to update the original data, which might cause an update conflict if somebody else has meanwhile worked on that data.

Although not perfect and by no means a fully fledged client/server application, it might still be worth a closer look for dmusicant. He could even store his data on a NAS and access it via a network share.

I have a couple of VFP apps that access, store and change data on a regular basis and I do this from 3 different Windows machines on my LAN.

hth

MK
 
I don't know what you're answering with 1 and 2.
I assume "yes" to my question "what happens?" means you can USE a DBF.
I would still like to see how RLOCK() behaves.

Bye, Olaf.
 
I have been using VFP itself (not the runtime) on several networked machines (some wireless, one on ethernet), running on data that's on a USB HD connected to one of my networked machines. Moving that HD to my router brought up file corruption issues, so I moved the HD back to a machine (laptop). I don't want to keep using that laptop as a server, so a NAS or a server machine with mirrored drives is on my shopping list. Other than the corruption issue mentioned, I've had more or less decent luck having my data on my LAN, except that at times VFP hasn't found the data. In those cases I have to "waken" the network, which I sometimes accomplish by clicking on the mapped drive in Explorer, yes, a PITA. I'm hoping that with a NAS I won't have that problem. I realize this might have to do with drives going to sleep, which saves power and wear, drive failure being a real issue.

Which Synology NAS do you have, MK? The most popular NAS HD's right now appear to be Western Digital Red HDs. Are you using those?
 
Hi Olaf

USE Z:\vfpdata\wunnraum\students.dbf ALIAS st1 IN 0
USE Z:\vfpdata\wunnraum\students.dbf ALIAS st2 IN 0 AGAIN
BROWSE
SELECT st2
BROWSE
?RLOCK() - returns .T.
?RLOCK("ST1") - returns .T.

Hi dMusicant

DS212+ with 2 * WD2002FAEX Black 2TB RAID 1

MK
 
You showed VFP locking the same record (the top record) of students.dbf twice, but that succeeds only because both aliases are in the same instance of VFP; you already hold the lock anyway. It doesn't prove anything.

I want to know whether the lock works, and what proves it is a second VFP instance failing to write to the locked record. RLOCK() returning .T. doesn't prove there really is a lock; it may just be local to whatever WebDAV caches. So start a second VFP instance, open the same table, try to write to a record locked in the first instance, and see if that write fails due to the lock. That's what I proposed in short earlier.

It would be ideal if that second VFP instance were on another computer, also attached via WebDAV, because in a second instance on the same computer the lock may also be merely local. As far as I read, WebDAV and similar protocols cache things locally, and that wouldn't necessarily make locks work for other users. If manual locks don't work, then automatic locks won't work either, and in a real-world scenario with several users you'd sooner or later get corrupted files from multi-user access. WebDAV would then only be a solution for a single user needing access from anywhere.
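The test being asked for could be scripted like this, with the path taken from the earlier post and the second block run in a separate VFP instance, ideally on another machine on the same WebDAV drive:

```foxpro
* Instance 1: take and hold an explicit record lock
USE Z:\vfpdata\wunnraum\students.dbf
GO TOP
? RLOCK()                  && .T. here only says this instance thinks it locked

* Instance 2 (separate VFP, ideally another computer):
USE Z:\vfpdata\wunnraum\students.dbf
GO TOP
SET REPROCESS TO 1         && attempt once, then fail instead of retrying
? RLOCK()                  && should be .F. if the lock really reaches the server
REPLACE name WITH "test"   && should fail with "Record is in use by another user"
```

If the REPLACE in instance 2 succeeds, the lock never left the local cache, which is exactly the failure mode that corrupts files under multi-user access.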

Don't get me wrong, this is already beyond my expectations and looks nice, but I'd rather use a SQL server for data I want to reach from anywhere. Such remote file access protocols are good for Word docs and documents in general, also graphics files, any file used by one user at a time. Not for databases.

Bye, Olaf.
 
Hi Olaf,

WebDAV then would only be a solution for a single user needing access from anywhere.

YES, and that's what I wrote in my previous post:

... but you have to make sure that the cached data is uploaded in order to update the original data - which might cause an update conflict if meanwhile somebody else has worked on that data.

hth

MK
 
I just reread the discussions in this thread, which I think are very helpful.

A couple of weeks ago I ordered a Synology DS214play NAS, and just yesterday was satisfied with my installation and began copying my data (~350GB) from the 2TB USB HD that's been home to my Foxpro and much of my other _important_ shared data to date. So far so good as far as VFP goes, my apps (described above) appear to work fine. It's too early to declare any kind of victory, but with fingers crossed I must say that I'm pleased. No annoyances yet.

I have two Western Digital Red 3TB HDs, mirrored in RAID1 in the NAS, so I have a level of data protection I've never enjoyed before. I have suffered several HD deaths in recent years after many years of being charmed. I have to wonder if HD manufacturing quality has fallen off, or if the Western Digital Elements series is simply intrinsically flawed. I plan to make regular backups to 3TB USB HDs to store off site. Doing that from this NAS should be quite simple.

The Synology OS (DSM, now at version 5) and the collection of included and downloadable apps are extensive. MySQL and PHP are supported, including for remote access, maybe something I can use; I'm going to look into that. I'm just getting started with the NAS, got off the ground today, and I like it. I can't express the frustrations I'd been having with inconsistent access to my server data before today.

I kind of jumped in with both feet yesterday, deciding to just plop my data on Disk 1 of the NAS. I haven't looked into Cloud Station, the Synology app that syncs files between networked machines. If I understand it correctly, using it would keep the data truly local, which would obviously make FoxPro very snappy. So far I haven't seen speed issues, but it's early, and I may look into Cloud Station if I do see speed/latency issues that bother me. I am working on an ethernet-connected machine; obviously the wifi-connected machines will be slower. There may be other benefits to Cloud Station. I only have my toe in the water at this point and haven't looked at the apps, native and downloadable.

I got this DS214play NAS, not so much because of the vaunted (hyped?) transcoding functionality (which I may use), but more so because of the much better 1.6GHz dual core CPU (the DS214's is 1GHz), and the 1GB RAM (the DS214 has 512MB).

 
