
Any way to use/save VFP 9 database/tables to OneDrive or other cloud solution

Status
Not open for further replies.

ostrowlaw

Technical User
Dec 4, 2009
18
US
I know I'm probably a bit late to this party, but I'm not a programmer, just a user. I run a small law office with just a few users. I started out using dBase II back on a Kaypro II CP/M machine in the early ages of dinosaurs (the 1980s), and I've developed my office database programs over the years, first to dBase III, then FoxPro, and now it's running on VFP 9. It's a fairly simple program, but it runs my small law office just fine; it's stable, tracks my cases, does my billing, etc. When I need to make a change, I just change it and it keeps on ticking . . .

I'm getting up there in years and I would like to spend some time in warmer climates in the winters, but I plan to keep working. Last winter I tried using RemotePC to log into my server and run that way; basically, I get a snapshot of my office computer, but it's clunky and disconnects. So, I was thinking, is there some way to host my databases, indexes, etc. in the cloud, like on OneDrive or something, and map the cloud drive as a virtual drive? (Currently I do my backups using Carbonite, but I know that's not a solution.) If so, can multiple users access the databases simultaneously? Will the databases lock as they do now? Yes, a million questions.

Thanks, just need some direction to get this project going before next winter.
 
Ostrowlaw,
While I haven't tried to run a VFP app this way, it is technically possible to map a OneDrive folder as a drive letter on a PC. To do that:

1. Open your web browser and go to the OneDrive website.
2. Sign in with your Microsoft account if you haven't already.
3. Navigate to the folder you want to map as a drive letter.
4. Click on the "..." icon next to the folder name and select "Embed" from the dropdown menu.
5. In the "Embed" window, click on the "Generate" button to create an embed code.
6. Copy the embed code to your clipboard.
7. Open File Explorer on your PC.
8. Right-click on "This PC" and select "Map network drive" from the dropdown menu.
9. In the "Map Network Drive" window, choose a drive letter from the dropdown menu.
10. In the "Folder" field, paste the embed code you copied earlier.
11. Check the box next to "Reconnect at sign-in" if you want the drive to be mapped automatically every time you start your PC.
12. Click on "Finish" to complete the mapping process.

A word of caution though. You mentioned that in your remote warm beach climate, "it disconnects". The same could still be true for you this way if it's an issue of the stability of your network. One issue here is that an "inelegant" closing of VFP tables can result in corruption of table data and/or indexes. Index corruption is easily resolved if you have some type of reindex routine (one that first drops all existing indexes, and then rebuilds them from scratch; and most of all, DO NOT use REINDEX), but if you have corrupted table data then it's a bigger issue.
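For what it's worth, such a reindex routine is usually just a matter of deleting the tags and recreating them. A minimal sketch, with the table name and index expressions invented purely for illustration:

```foxpro
* Hypothetical reindex routine: drop all index tags and rebuild them
* from scratch. Deliberately avoids REINDEX, which rebuilds on top of
* a possibly corrupted CDX header.
USE cases EXCLUSIVE
DELETE TAG ALL
INDEX ON case_no          TAG case_no
INDEX ON client_id        TAG client
INDEX ON UPPER(last_name) TAG lastname
USE
```

In a real app you would keep the list of tables and index expressions in one place so the routine can rebuild everything after a crash.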
I will also note that the tables most susceptible to data corruption are those with MEMO or GENERAL fields. That corruption can be unrecoverable, particularly if you don't have a backup data set. (And realize that even if you do, it's only as current as the last backup. So if you back up every day, you could still potentially lose a full day's work. If it's less frequent, then even more so.)

Would love to hear your experience with this. I might try it on a small test myself.



Best Regards,
Scott
MSc ISM, MIET, MASHRAE, CDCAP, CDCP, CDCS, CDCE, CTDC, CTIA, ATS, ATD

"I try to be nice, but sometimes my mouth doesn't cooperate."
 
I haven't done this, but the only way such cloud drives (also Google Drive) can be mapped as a network drive is just like you can map an FTP server as a network share. The operations you then do on the files actually cause FTP file transfers to the local drive, and that local file is used. In short, you don't get record locks and the other things VFP does via the SMB protocol, as you don't really have a LAN mapping to the cloud files.

It's easy to answer that question by experiment. Make the test and see if it holds any water: try to USE a dbf EXCLUSIVE (I think that will work okay), and then go to another PC, even better if it's somewhere else in the world, also connecting to that OneDrive by mapping, and let it USE the same DBF. It should cause error 1705 "File access is denied". And my bet is on not getting that error, because both clients will actually cause OneDrive to FTP-transfer a dbf (and also the cdx/fpt, by means of VFP also accessing those files) and then open that local file or files. And that's always just used by one user.

And as things don't work on that coarse level of the whole file, you also don't get record locks. The whole operation on files is on local files, and changes are synched back. And there is no merging taking place, so two users working at the same time can easily sabotage each other's changes.

Chriss
 
Chris,
I just mapped the drive, opened a table, appended two records to the table, and exited VFP. Went back in, reopened the table, and the records are there.
That's the first simple test. (And if he is the ONLY one using the app, which sounds like it might be the case, this could still work.)
I'm going to try something more sophisticated next.

Also, the way I initially looked up to map the drive letter, which I mentioned above, didn't work. But what did work was:

Just log in to OneDrive, access any folder, look for the cid=<value> in the URL, and put that value in as the CID number. It mapped to drive letter Z no problem. (Note that it also asks you to authenticate, so you'll need your OneDrive username and password, and if you want it to be persistent, click the "save credentials" checkbox.)
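For anyone else trying this, the mapping Scott describes is typically done against OneDrive's personal WebDAV endpoint at d.docs.live.net. A sketch of doing it from within VFP by shelling out to Windows (<YourCID> is a placeholder for the cid= value from the browser URL):

```foxpro
* Sketch: map the personal OneDrive WebDAV share to Z: by shelling out
* to the Windows NET USE command. Replace <YourCID> with the cid= value
* seen in the OneDrive URL; Windows will prompt for your Microsoft
* account credentials.
RUN net use Z: "https://d.docs.live.net/<YourCID>" /persistent:yes
```

The same command can of course be run directly from a command prompt; VFP's RUN just hands it to the OS.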
Next, I'll go for a TABLEUPDATE() on a file with pessimistic record locking on.

But if what I understand you are saying is, this would work, so long as there is only 1 user at a time?

Best Regards,
Scott
 
Chris,
Ok, so I just ran your test, and the result is contrary to your expectation.
I'm currently sitting in Chiba Japan. I mapped the OneDrive file to Z:
I started VFP, opened a data session on the table "COMPANY", with USE COMPANY EXCLUSIVE
Then using LogmeIn, I accessed my PC in Tokyo. Mapped the Onedrive the same way.
Started VFP. Tried to open the table with "USE COMPANY EXCLUSIVE" and got the expected message "Table is in use by another", indicating that the file lock IS in place even over an internet mapping.

I then released the table on my local PC.
Went back to the remote accessed PC, and tried to open the table exclusively there, and it opened. And now on the local machine I get the same message "File is in use by another".

I would say that reasonably proves, it's working?



Best Regards,
Scott
 
Interestingly, I had tried this about 5 years ago and it didn't work. But today, it seems things have evolved, and it is now working.
I will build a test set of the full data and try running my local application against it. This is awesome, actually; glad someone asked, and I've started poking at it again.


Best Regards,
Scott
 
That's good news, Scott. Indeed that indicates OneDrive is actually mapped in a way that allows that usage. It would be interesting whether something like that works with a Google Drive, but that starts with the question of whether such a mapping is possible in the first place. So kudos to MS for integrating OneDrive with the Windows OS that well. At least they have that fully in their own hands.

The first test, Scott, is something I would also have expected to work; it would also work if the file were actually downloaded and the change uploaded again. A single record change in a 100MB dbf file would just a) first download 100MB and b) upload 100MB including the one record change, i.e. it would be inefficient, but it would work.

The next level test would be an RLOCK(). Even if locking the whole file works, record locking isn't necessarily also working. But by the same principle: find out by trying. The answer is in the experiment.

Chriss
 
Dear ostrowlaw,

I can think of a way as follows, and it's simple to maintain as well.
I used to do this some time back. I'm not sure whether Google has changed anything in this regard since then.
However, you can give it a try.

Usually, these services (say, Google Drive) provide a locally installable sync utility which gives most of the features of the cloud version. If you use Google Drive for backup, a typical setup will look like this (you need a Google user account, of course):

1. Download Google Drive for desktop and install it.

2. Once it is installed, I believe it configures itself to sit in the Windows taskbar and starts running automatically.

3. It has a feature to designate which folders on your local drive are to be synced to Google Drive (you can go through their documentation).

4. Create a separate folder (say 'MyGoogleDrive') on your local PC drive and specify this folder in your desktop Google application as a folder to be synced.

5. Whenever you want to back up your files (say, at the end of every day after the day's work is finished), copy them to the 'MyGoogleDrive' folder.

(Here, a better way, if you can get a good VFP programmer, is to create a small VFP program to copy all the data files which need to be backed up to the 'MyGoogleDrive' folder.)

6. Once the files are placed inside the 'MyGoogleDrive' folder, your local Google application will automatically sync them to your cloud Google Drive.
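The small VFP backup program suggested in step 5 could be sketched like this. The folder paths and file masks are assumptions for illustration, and it should run after all tables are closed so none of the files are in use:

```foxpro
* Hypothetical end-of-day backup: copy DBFs and their companion
* memo/index/database files into the folder watched by the Google
* Drive desktop sync client. Paths are made-up examples.
LOCAL lcSrc, lcDest, lnE, lnCount, lnI
LOCAL laExt[4], laFiles[1]
lcSrc  = "C:\LawOffice\Data\"    && assumed data folder
lcDest = "C:\MyGoogleDrive\"     && assumed synced folder
laExt[1] = "*.dbf"
laExt[2] = "*.fpt"
laExt[3] = "*.cdx"
laExt[4] = "*.dbc"
CLOSE TABLES ALL                 && make sure nothing is still open
FOR lnE = 1 TO ALEN(laExt)
   lnCount = ADIR(laFiles, lcSrc + laExt[lnE])
   FOR lnI = 1 TO lnCount
      COPY FILE (lcSrc + laFiles[lnI, 1]) TO (lcDest + laFiles[lnI, 1])
   ENDFOR
ENDFOR
```

The sync client then takes care of uploading whatever lands in the destination folder.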

Rajesh
 
Scott said:
I had tried this about 5 years ago, and it didn't work

One thing is for sure: it didn't always work. I also found that the address you enter in the process of mapping a OneDrive as a network drive is a URL containing the CID, and that compares to how you map an FTP server as a network drive. I had generalized that using a URL in a network drive mapping automatically means binding to it via FTP. It will likely still not just be the NetBIOS or SMB protocol used when accessing the OneDrive files, but it is more like that than like FTP, as you detect a file lock from another location.

Checking TABLEUPDATE() is only checking functionality that works locally. If you want to test the nature of OneDrive file access, you have to test something that's detected on other PCs, too, like a file or record lock. Those are the interesting tests, not working on the files themselves, Scott. And what's so interesting about a lock is that it doesn't change anything in the file itself. So no syncing would transfer a lock; it's something the file system has to know and will then also tell other clients. Data you change, in contrast, is actual byte changes within the file, and nobody questions that OneDrive or other cloud drives sync the file content itself.

In detail: TABLEUPDATE() updates a table from the buffer of changes, and that buffer is a local thing, nothing that would concern OneDrive. The only way TABLEUPDATE() would fail is on the level of locking. The lock mechanism you choose for the cursor's buffer mode, pessimistic or optimistic, makes a difference in the timing of locks. Optimistic buffering actually makes no locks worth mentioning up front; it locks just when writing, which are the automatic locks you always have anyway. Only the pessimistic buffer mode means a change in a workarea isn't only buffered; the record has to be successfully locked before the buffer will accept the change. And then it again depends on where the lock is noted: it actually works if it's noted remotely and other systems see it. It would also work in itself if it only locked locally, but then it would just always succeed, even if a lock on the real remote file wouldn't. So RLOCK() is the next test it has to pass, not TABLEUPDATE().
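As a concrete illustration of the buffering behavior described above, here is a minimal sketch; the table and field names are invented, and 2 is the documented CURSORSETPROP() Buffering value for pessimistic row buffering:

```foxpro
* Sketch: with pessimistic row buffering, the record lock is attempted
* as soon as you start changing the record, not at TABLEUPDATE() time.
SET MULTILOCKS ON
USE customers SHARED
CURSORSETPROP("Buffering", 2, "customers")   && 2 = pessimistic row
REPLACE city WITH "Chiba"                    && attempts the record lock here
IF NOT TABLEUPDATE(.F., .F., "customers")
   MESSAGEBOX("Update failed - record may be locked elsewhere.")
ENDIF
USE
```

Whether that lock is visible to another PC mapped to the same OneDrive is exactly what the experiment has to show.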

Chriss
 
@Rajesh,
That only backs data up. It doesn't make the files useable remotely, which is what the OP is trying to do.
I'm in the process of validating this for OneDrive. The first test has passed (file locking). Now I'm working on proof of the second, record locking, but I'm having a hard time with this, as I can't seem to get FLOCK() to return the expected values even on my local copy (not running on OneDrive). So I have to figure out what I'm doing wrong in my own test, to first see if I can prove the function of RLOCK() on two different machines both accessing the same file and record at the same time.


Best Regards,
Scott
 
Scott24x7 said:
Ok, so I just ran your test, and the result is contrary to your expectation.
I'm currently sitting in Chiba Japan. I mapped the OneDrive file to Z:
I started VFP, opened a data session on the table "COMPANY", with USE COMPANY EXCLUSIVE
Then using LogmeIn, I accessed my PC in Tokyo. Mapped the Onedrive the same way.
Started VFP. Tried to open the table with "USE COMPANY EXCLUSIVE" and got the expected message "Table is in use by another", indicating that the file lock IS in place even over an internet mapping.

Did you try opening it SHARED from both places simultaneously?
 
Scott,

what are you trying to test with FLOCK()? And have you SET EXCLUSIVE OFF in both your IDEs? Otherwise, you always try to open DBFs exclusively, and FLOCKing on top of that is quite useless, as you already have exclusive access.

There are only subtle differences in testing FLOCK() instead of EXCLUSIVE. If one computer has a DBF opened and FLOCK()ed, other computers trying to USE the same DBF get the same error 1705 "File access denied".

But as you FLOCK() after opening a DBF in a workarea, there is the possibility of first opening the same DBF on two computers and then doing an FLOCK() on one computer, which will succeed and return .T., and then trying an FLOCK() on the other computer, which will behave depending on your SET REPROCESS setting. If you don't limit retries to a count of attempts, it will freeze that session and not return; it will reattempt until it gets the file lock. Usually, you'd want a prompt .F. or .T. return and thus set reprocessing to perhaps only 1 attempt. So there's that subtle difference to trying to get a file exclusively in the first place.

Another subtle difference in the sample case I described is that two (or more, indeed) computers can have a DBF open shared and then one of them locks it with FLOCK(). In that case, nothing happens on the other computers at first, so you can have one PC with a file lock on the DBF and another still having it open in shared mode. The topic here is what happens if you want to write to the DBF, not when you want to read from it. In comparison, an exclusive USE already fails if the file is open in shared mode on another computer; it really aims for exclusive access to the file, while FLOCK() aims for exclusive write access to the file only.

Testing RLOCK()s may actually be simpler, at least once you know the conjunction with SET REPROCESS.

Chriss
 
Wow! You guys are just awesome. I have a lot to think about and to work through. (Remember, I'm not a programmer, just a user. In the olden days, there were no dBase programmers; everyone was a user, and we learned how to build simple programs to get our work done.)

I'm going to put some time aside to try some of these suggestions.

Thanks again. If anyone has anything to add, I would appreciate it.

Alan
 
Dan,
Yes. I can easily open the file shared in both locations. I can navigate it and make changes, BUT I haven't yet been able to test changing the same record on both ends at the same time.

Chris,
It's weird. I did have EXCLUSIVE set ON, so I set it OFF. I got the same result.
Then I went to the "Data" tab, set MULTILOCKS ON, and selected Record (Pessimistic) buffering.
Then I selected a record at random (let's say record 21), so I do this:
GO 21
RLOCK(RECNO())
MESSAGEBOX(FLOCK(RECNO())) => .F.
MESSAGEBOX(FLOCK()) => .T.

I get this same result both on the local machine and when I have the same table open remotely, and from both computers. So apparently I am still doing something wrong, to even demonstrate that FLOCK() will return anything when a specific record is locked.


Best Regards,
Scott
 
Interesting discussion above. Maybe because my code was from dBase II and dBase III, I never really tried to change the data in any record at the same time. I always placed an RLOCK() on the record before the edit, and UNLOCK afterwards.

Alan
 
Scott, FLOCK() is not about record locking; it's a file lock. RLOCK() is about record locking. FLOCK(RECNO()) makes no sense: FLOCK() takes a workarea number or alias name as its parameter, so FLOCK(RECNO()) would try to lock the DBF in the workarea number corresponding to the recno of the current record. That's wild.

I can post some test code for that later. But indeed, testing FLOCK() is simpler by preparing a situation manually and executing single commands from the Command window. The same actually goes for RLOCK(). So prepare a situation in workareas where the next step will tell you that an FLOCK() or RLOCK() on the other computer is visible to you: you don't get an FLOCK(), too, or you don't get an RLOCK() on the same record, for example. Indeed, MULTILOCKS must be set ON to do RLOCK()s, but I don't see the idea behind what you're doing, aside from the wrong usage of the FLOCK() function.

Chriss
 
Hi,

Again, I'm not a programmer, but in my VFP programs, when I am attempting to access any record to make changes, I issue an RLOCK(), then do whatever I'm trying to do, i.e. REPLACE, edit, etc. When I'm done, I issue an UNLOCK. Most of the major things that VFP does automatically lock the table anyway, such as REINDEX, REPLACE ALL, etc. But I usually issue a LOCK() or FLOCK() followed by an UNLOCK when I'm done.
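The lock-edit-unlock pattern described here maps directly to VFP; a minimal sketch, with an invented table and field purely for illustration:

```foxpro
* Sketch of the classic pattern: lock the record, make the change,
* release the lock. Table/field names are made up for illustration.
USE billing SHARED
SET REPROCESS TO 1          && fail fast instead of retrying forever
GO 21
IF RLOCK()
   REPLACE amount WITH amount + 50
   UNLOCK
ELSE
   MESSAGEBOX("Record 21 is in use by another user.")
ENDIF
USE
```

SET REPROCESS matters here: without a retry limit, a contested RLOCK() just keeps retrying instead of returning .F.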
 
So here are some tests you can run.

1. FLOCK test 1: computer 1 (c1) opens a DBF and FLOCK()s it; computer 2 (c2) tries to open the same DBF and fails with error 1705.
On c1:
Code:
CD mappedonedrive 
Use table.dbf Shared
Messagebox(FLOCK())
expected result: .T.
Then on c2:
Code:
CD mappedonedrive 
USE table.dbf
expected result: error 1705 "File access denied"

2. FLOCK test 2: c1 and c2 open the same DBF. c1 then FLOCK()s first with success; c2 FLOCK()s without success.
On c1:
Code:
CD mappedonedrive 
Use table.dbf Shared
Then switch to c2 and do
Code:
CD mappedonedrive 
Use table.dbf Shared
Switch back to c1 and do
Code:
Set Reprocess To 1
Messagebox(FLOCK())
Expected result: .T.
Switch back to c2 and do
Code:
Set Reprocess To 1
Messagebox(FLOCK())
Expected result: .F., since c1 has the flock.

And yes, aside from SET REPROCESS this is the same test code; just, in the second test, first both c1 and c2 open the same DBF shared, and then, after both c1 and c2 have the DBF open, c1 locks with FLOCK(). Notice FLOCK() needs no SET MULTILOCKS ON, nor buffering of the workarea.

RLOCK() is actually quite the same; you'd best use it without any parameters. Its parameterization is also about a workarea in the first place, not about recnos. With no parameters, you lock the current record of the current workarea. Called with one parameter, that parameter is interpreted as a workarea number, not a recno. You can also lock multiple records with one call; then you need to pass in a list of record numbers, but as one parameter, a string of the recnos. So you do RLOCK("1,2", "Aliasname"). Indeed, the second parameter then is the alias name, not the first. But the second parameter can't be left out for locking in the current workarea; then the "1,2" string is interpreted as an alias name and you get the error "Alias not found". In very short: RLOCK() is a very quirky implementation. At best, you just GO to the same recno on both c1 and c2 and then RLOCK().

3. RLOCK test:
On c1:
Code:
CD mappedonedrive 
Set Multilocks On
Use table.dbf Shared
Messagebox(RLOCK())
expected result: .T.
Then on c2:
Code:
CD mappedonedrive 
Set Multilocks On
Use table.dbf Shared
Set Reprocess To 1
Messagebox(RLOCK())
expected result: .F., as c1 already has that lock.

Notice, both c1 and c2 are on record number 1 after they USE the DBF. You could ensure that with a GO 1 before the MessageBox(RLOCK()).

From that, you will know enough. Because under the assumption that c1 and c2 both use a local copy of the cloud file, both c1 and c2 would succeed in locking. That was already disproved with your EXCLUSIVE use test, but now you also check that the RLOCK() mechanism works, which is also used behind the scenes by many commands, functions, and SQL. We then know that not only is opening a file in exclusive mode effective for other computers, but so is VFP's quite unusual method of locking single records within a DBF.

You can of course do many more tests. One thing you can try in the last test, after the c2 MESSAGEBOX(RLOCK()) returns .F., is browsing the DBF and trying to make a change in any field of record 1. That should cause error 109 "Record is in use by another user" and the browse window cells should remain unchanged.

And, of course, anybody besides Scott with two clients connecting to the same OneDrive as a mapped drive can try this to check whether this mechanism works. If it does, then it's clear that the computers work on the same cloud file. Nothing but VFP knows how to determine the record lock status, and it's insane to assume that this locking information comes over to other computers by syncing of the files alone. So that will sufficiently prove that MS managed the mapping of network drives to OneDrive in a way that has the same quality as mapping a LAN share. I can imagine they create a VPN tunnel as the backbone of this working flawlessly, but the details would only matter if you tried to implement a similar cloud share with your own means. I think the simple solution to that is a WAN and the hardware and software setup to do that. It's just not as simple and end-user friendly as mapping a OneDrive is.

Ostrowlaw,
I think exactly that is what makes it so desirable, no matter if the latency of data access is worse that way than with other solutions. By the way, you get this wrong when you say it's about testing editing at the same time. It's about testing whether you actually use the same file. Imagine each computer has a local copy of the file and this is uploaded after a change. Imagine user 1 edits record 1 and user 2 edits record 2. You would expect both changes to be there, as they don't clash with each other, even with RLOCKs. The point is that simple syncing will first overwrite the server file with one version, then the other. And in the end, either the change to record 1 remains or the change to record 2, not both. This would mean only one user at a time can work on this without such bad side effects. Unfortunately, such a scenario doesn't show up conflicts or errors when 2 users do work with DBFs; you'll only sometimes, later, not find your data or your changes. The important thing is not even that RLOCKs are working; you can work on DBFs without RLOCKs with few users. What this test proves, on top of proving that RLOCKs work, is that you actually have the cloud file in your workarea, and not just a local copy of the cloud file that's synched with the OneDrive so frequently that it seems instant to you, for small files at least.

Chriss
 
Hi Scott

Scott said:
@Rajesh,
That only backs data up. It doesn't make the files useable remotely, which is what the OP is trying to do.

Yes, you're correct. I'd not read the post properly.

Now, I am following this thread to know more about utilising a cloud drive service to store and read/write a VFP database.

Rajesh
 
Chris,
Ah, bummer... this test fails. Both return .T. when checking the same record lock. :(
Actually, you don't need 2 PCs to check this with, just 2 instances of VFP running on the same box. Even in that case, against the mapped OneDrive table, I still get .T. with RLOCK() in both instances. When I checked this with 2 instances of VFP running on the same box against the same local table, the first instance returns .T. and the second returns .F., as expected.

So I tried it both using 2 remote PCs and 2 instances of VFP running on the same PC, and in both cases this failed.

HOWEVER... I'm not sure all is lost. As Chris mentioned, if it's making a local copy of the table while you're working against it, then this is bad, and any sizeable file will be problematic.
I'll test this next. However, if it allows for updating remotely, and the table isn't copied each time you open it (it was very fast on my PC, and the table was not small, but not huge either), I'll see what kind of timing results. But I do fear Chris is right.


Best Regards,
Scott
 
