Tek-Tips is the largest IT community on the Internet today!


An executable file on a local network

Status
Not open for further replies.

ameedoo3000

IS-IT--Management
Sep 20, 2016
EG
hi all
Is it possible to work on a Visual FoxPro project over a local network, the way one can with Access?
The question may be naive, but some errors came up in this experiment, so much so that I began to think it was impossible to work on the same executable file over the local network!
What are the reasons I cannot work on the same executable file of a Visual FoxPro project over the local network?
Please advise
 
VFP opens PJX files exclusively, no matter what your general setting for exclusive data access is.

Regarding storing an EXE on a share and starting it from there: that is often done and often causes problems, most of which go away if you use a cmd file to copy the EXE to the local drive and start it there.

Whatever question you had, this should resolve both.

The comparison with Access suggests you don't really use VFP as an IDE to produce EXEs, but you should. Source-code projects are not meant to be shared among users the way Access databases are. VFP is an IDE for a developer, and while you may share as much source code as you like, VFP9.exe is not among the files you're allowed to redistribute to others. You have an IDE that is capable of building EXEs and DLLs; those are what you redistribute to users, not the IDE itself.

Bye, Olaf.



Olaf Doschke Software Engineering
 
Sorry, do I understand correctly that sharing the work is done only through the tables and not through the executable file of the project?
 
You share the EXE by installing it on all participating clients. They all use the same shared DBFs. The data can be on a network share. How to program for shared access is described here: [URL unfurl="true"]https://docs.microsoft.com/en-us/previous-versions/visualstudio/foxpro/h6hhascz(v%3dvs.80)[/url]
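As a minimal sketch of the shared-access basics that documentation covers (the share path, table, and field names here are hypothetical):

Code:
* Open tables shared rather than exclusive
SET EXCLUSIVE OFF
USE \\server\data\customers.dbf SHARED ALIAS customers

* Lock the current record before changing it, then release the lock
IF RLOCK()
   REPLACE customers.balance WITH customers.balance + 10
   UNLOCK
ELSE
   * Another user holds the lock; retry or inform the user
   MESSAGEBOX("Record is in use, please try again.")
ENDIF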

Indeed you can start one EXE multiple times, but even that has a hurdle: each instance creates and recreates a foxuser.dbf, and that will be the first thing you stumble over.

To make maintenance and releasing a new version easier, you put the EXE on a share, but only so a start script can copy it to the local drive.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Here's a start.cmd script I recommended in thread184-1762130

Code:
rem Switch to the local application folder
c:
cd \apps\myapp
rem Rename the local EXE to EX_ so robocopy compares it with the EX_ on the share
IF EXIST myapp.exe ren myapp.exe myapp.ex_
rem Mirror the share to the local folder (only copies changed files)
robocopy /MIR \\share\apps\myapp\ c:\apps\myapp
rem Rename back to EXE and start the local copy
IF EXIST myapp.ex_ ren myapp.ex_ myapp.exe
start /b myapp.exe

If you manage to educate your users not to start the EXE from the network share, and/or add code so the EXE quits itself when it detects that SYS(16,0) points to a network share rather than the local drive, then you can leave out the renaming steps: EXE to EX_ when releasing and before the robocopy update, and EX_ back to EXE when starting.

Notice that with the EX_ vs. EXE mechanism you want robocopy to compare the EX_ on the network share with the local executable that was already renamed to EXE after it was copied over. That's why [tt]IF EXIST myapp.exe ren myapp.exe myapp.ex_[/tt] renames the local EXE back to EX_ first: robocopy then compares the local EX_ with the share's EX_ and only copies it over if there is a new version.

The downside of this may be that antivirus software gets suspicious about files renamed to an executable extension. So if you have code within your EXE that ensures it only runs from whatever application directory you want to enforce, the start script can be simpler:

Code:
rem Switch to the local application folder
c:
cd \apps\myapp
rem Mirror the share to the local folder, then start the local copy
robocopy /MIR \\share\apps\myapp\ c:\apps\myapp
start /b myapp.exe

Start-up code in that EXE might be:
Code:
* Quit unless running from the expected local path (== forces an exact match)
IF NOT LOWER(SYS(16,0)) == "c:\apps\myapp\myapp.exe"
   QUIT
ENDIF

It still poses one problem: users starting the EXE from the network share create a foxuser.dbf there, which is then copied to all other clients. I'd recommend a config.fpw with RESOURCE=OFF anyway, so you don't have a foxuser.dbf at all unless you use it for things like defining the print-preview toolbar. But then there is really no good way to protect the local foxuser.dbf from being overwritten: it has to be read-write for VFP to work with it, and setting it read-only to protect it doesn't help.
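A minimal config.fpw for that (placed next to the EXE or built into it; the RESOURCE line is the relevant one):

Code:
RESOURCE=OFF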

Bye, Olaf.

Olaf Doschke Software Engineering
 
By the way: even if you have no trouble with an EXE started directly from the share, one problem with sharing an executable that way is that you can only upgrade the EXE to a new version once all users have exited the application. And you'll find users leaving their workstations on and keeping applications running.

Bye, Olaf.

Olaf Doschke Software Engineering
 
I might add that if you are unfamiliar with how to build a VFP EXE, you might want to spend some time looking at the free, online tutorial videos at:
Most especially those titled: Building a Simple Application

Good Luck
JRB-Bldr
 
Now everything is fine
Thank you very much ..
I had another question .. How can I make data updates fast over the network?
 
You can't write as fast as to a local HDD. What's your LAN bandwidth? I assume today's 1 Gbps standard.
How many users share it, and what other LAN traffic goes through it?
And how fast are the file share's drives? Do you use SSDs?

That's what you can do on the hardware side. Use fast drives and network hardware.

On the software side, you can index your data ideally for your queries; that's "a science in itself", but no index accelerates writes! Every index needs to be written and updated when you insert or update data. That's no reason to avoid indexes, though, because read operations are much more frequent, so maintaining the indexes is a benefit for overall speed: the faster users read, the more bandwidth is left for the write operations.
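For example, indexing a customer table for its typical lookups might look like this (table and field names are hypothetical; building a structural index needs exclusive access once):

Code:
USE customers EXCLUSIVE
* One tag per field you frequently filter, seek, or join on
INDEX ON customer_id TAG cust_id
INDEX ON UPPER(lastname) TAG lastname
USE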

But you can't force your data up the wire into the DBF faster than the combination of network speed and HDD speed allows, and on top of that the file server's CPU has to handle your write operation, too. The limiting factor is most often the LAN. You would need a very unusual situation, with concurrent updates to many tables sitting in far-apart sectors of a traditional platter drive, for writing itself to slow things down, and even then the drive will queue the write operations and handle them better than in the purely chronological order they come in.

If you want shared data you have no choice other than LAN.

What write times are you experiencing, and in what hardware environment? The question is whether you really have bad hardware or just unrealistic expectations despite having a modern network backbone and file server.

Using another backend such as MS SQL Server means a different balance of resource needs. Data still has to travel the network; that aspect is not improved by SQL Server. In write operations, a client is done once it has sent over the SQL INSERT/UPDATE statements; the time the server needs to write them out is no longer the client's concern. But that service needs more CPU on the server side, more RAM to handle database caching and transactions, and more HDD space for the transaction logs that VFP DBFs don't write. This can be faster, but doesn't necessarily become faster.
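A minimal sketch of such a write via VFP's SQL pass-through functions (the DSN name, table, and values are hypothetical):

Code:
* Connect to SQL Server through an ODBC DSN and send the update
lnHandle = SQLCONNECT("MyAppDSN")
IF lnHandle > 0
   * The client is done as soon as the statement is sent and acknowledged
   SQLEXEC(lnHandle, "UPDATE customers SET balance = balance + 10 WHERE id = 42")
   SQLDISCONNECT(lnHandle)
ENDIF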

What you'll never get is the write speed of local writes, but local writes don't allow sharing the data, do they? If you make one client the file server, only that client gets the advantage of faster writes, and you pay a high price for it, because clients are not good at being file servers.

If you're using a NAS device attached to some router in your network, double-check that the effective bandwidth isn't down at poor-WLAN rates because clients aren't using cable connections, or because the NAS itself is only attached at 100 Mbit. All of that should be at today's normal technology levels.

Bye, Olaf

Olaf Doschke Software Engineering
 
There are further details: indexes do accelerate writing back views and remote data, because in that case it means finding the original records via primary key, and the index on that key matters.

But indexes were already mentioned. Where this plays no role is when you buffer a DBF and commit changes with TABLEUPDATE(): that doesn't cause VFP to write back the changes with generated SQL; you have the DBF you write back to open, and VFP acts directly on the DBF file. It's different with views: view cursors are a copy of the queried data, and changed and new view data is forwarded to the DBFs by relating back to the DBF records via primary keys.

That knowledge doesn't change your strategy; it just underlines the importance of primary keys, even in tables that have no related tables needing a foreign key for relationships. The primary key is the key to get back to where the data came from.
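The buffered-DBF case described above might look like this (alias and field names hypothetical):

Code:
USE customers SHARED
* Enable optimistic row buffering (mode 3) on this work area
=CURSORSETPROP("Buffering", 3, "customers")
REPLACE balance WITH balance + 10
* Commit the buffered change directly to the DBF - no SQL is generated
IF NOT TABLEUPDATE(.F., .F., "customers")
   * Someone else changed the record meanwhile; discard our change
   =TABLEREVERT(.F., "customers")
ENDIF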

Bye, Olaf.

Olaf Doschke Software Engineering
 