
Any problems running a VFP (EXE) via RDS on Server 2019 (or 2016)?

Status
Not open for further replies.

wtotten

IS-IT--Management
Apr 10, 2002
181
US
I have a VFP application compiled to an EXE (compiled with VFP 8, though I plan to recompile it in VFP 9) that I would like to move to a new server we will be bringing in. I am currently on Server 2008 R2, running the application in a LAN configuration (not via RDS, a.k.a. RDP). Even though I have Oplocks, SMB2 and SMB3 disabled, I still get file corruption once in a while (usually a header with an incorrect record count). A few months ago, suspecting the problem was the result of file caching either on the server or on a workstation, I started running the application's re-index program every night (it deletes all the TAGs and recreates them for every table in the app), and lo and behold, no more file problems. But it's not something I can keep on doing!
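The corruption symptom described above - a header with an incorrect record count - can be detected without opening the table in VFP. Below is a minimal Python sketch (the function name is mine, and it assumes the standard dBASE/FoxPro header layout: a 32-bit record count at offset 4, 16-bit header and record lengths at offsets 8 and 10):

```python
import struct

def dbf_header_check(path):
    """Read a DBF header and compare the stored record count against
    the count implied by the physical file size.
    Returns (stored_count, implied_count)."""
    with open(path, "rb") as f:
        header = f.read(12)
        n_records, header_len, record_len = struct.unpack("<IHH", header[4:12])
        f.seek(0, 2)                      # jump to end of file
        file_size = f.tell()
    # some writers append a 0x1A end-of-file byte; integer division ignores it
    implied = (file_size - header_len) // record_len
    return n_records, implied

# hypothetical usage: flag tables whose header disagrees with the file size
# stored, implied = dbf_header_check(r"\\server\share\data\customer.dbf")
# if stored != implied:
#     print("header/record-count mismatch")
```

A nightly sweep with a check like this would at least tell you *when* the header goes bad, which narrows down which client or process is involved.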

The customer is getting new servers in the next few months so we will be running Server 2019.

The application mostly uses contained tables, with a few free tables. As far as I can see, my options for stopping this file corruption (I can't keep running this nightly re-index forever) are either to move to a SQL back-end (lots of work, and I have no experience building a SQL back-end for a VFP application), or to run the application on the server via RDS (RDP). My belief is that running the application directly on the server means I don't have to deal with any file caching between the workstation and the server, which should remove my need to re-index every night. Am I correct in that assumption? What I would do is run the application as a RemoteApp.

Can you share your experiences running a VFP application (as an EXE) via RDS on a Server 2019 system? Also, can you tell me whether you are running Server 2019 as a VM or on a physical machine? Are there any "gotchas" I need to know about, or any suggestions you have?

Thank you,
Bill
 
Having both data and EXE on the same computer removes the oplocks problem, yes. Even if the terminal server is not hosting the DBFs, the network between the terminal server and the database file server is at least a closer neighborhood with a faster connection, so you get a more stable result.

What you plan should work. I've already seen trouble in the details with 2008 R2, with completely separate issues not related to oplocks, SMB, or the network at all, but nothing blocking. In short, you always have some system-change pains anyway.

Besides that, reindexing and packing DBFs overnight is a process I had started and recommended even before the oplocks issue. It has reasons separate from making files more resistant to corruption: it removes bloat and lets indexes work at their fastest, which in turn is also the mechanism that makes them more stable against corruption, since the intervals during which corruption can occur are reduced.

If the reindexing and packing take too long, it can pay to copy the data to a fast local drive before packing and reindexing and then copy it back, obviously. By planning to put the application EXE on the server hosting the database files, you remove that need for free anyway.

If you get to a point where reindexing takes almost the whole night, you'll tend to be hitting the hard limit of 2GB per file, and that then becomes the harder reason you need to move. If you think it's easy to split data either horizontally or vertically, think about how many changes that requires.
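As a rough guard against creeping toward that 2GB ceiling, the data directory can be scanned periodically. A minimal sketch - the function name and the 80% warning threshold are arbitrary choices of mine:

```python
import os

LIMIT = 2 * 1024**3          # VFP's hard 2 GB per-file ceiling
WARN_AT = int(LIMIT * 0.8)   # warn at 80% of the limit (arbitrary)

def files_near_limit(data_dir, warn_at=WARN_AT):
    """Return [(path, size)] for DBF/FPT/CDX files at or above the
    warning threshold, largest first, so growth can be watched
    before the 2 GB wall is hit."""
    hits = []
    for root, _dirs, names in os.walk(data_dir):
        for name in names:
            if name.lower().endswith((".dbf", ".fpt", ".cdx")):
                path = os.path.join(root, name)
                size = os.path.getsize(path)
                if size >= warn_at:
                    hits.append((path, size))
    return sorted(hits, key=lambda t: -t[1])
```

FPT memo files are usually the first to balloon, since deleted memo content is only reclaimed by PACK, so watching all three extensions matters.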

Bye, Olaf.

Olaf Doschke Software Engineering
 
On top of that, the new server now has two roles: while local file access is fast and removes the network from the file system, this one server now has to have the resources for all the users. And once you need more than one terminal server to serve the whole user base, the database on a local drive is only local for one of them. Keep that in mind. I don't know what your user numbers are, but running all user sessions in one place is a challenge, too.

Bye, Olaf.

 
I run a number of VFP apps on RDS from a 2016 server, for remote users working from home during lockdown.

We set it all up in a bit of a hurry, as you can imagine, but it works well: two servers, with access restricted via the users' IP addresses. They have to report their IP to a techie, who opens the firewall for them if it's changed (deliberately not automatic). We have the sessions pretty much tied down, so they are effectively running consoles with two apps for each user. Updates are actually easier than they were, because we can knock the users off if we need to.

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
I have a hundred or so users via RDP.

It works well. The main issues are the extra MS Office licences if you/they need Office automation, and the cost of the RDP licences.

Used to use separate physical machines but currently running on a VM.

TSPlus was mentioned in another thread recently.

hth

n
 
Nigel,

Thank you for your answer.
Do you have the 100 users on one server? Is it Server 2016, or 2019 or ? Any suggestions on sizing the VM re: CPU and RAM? I'll be using SSD drives.
Does your application use VFP tables or does it have a SQL back-end? Mine has contained and free VFP tables.

Bill

 
GriffMG,

Thank you for your answer.
You mentioned that you have two servers. Does each server have both the application and the data on it, or are the application and the data on two separate servers? I'm curious whether you are transmitting data across the wire between machines.
Does your application use VFP tables or does it have a SQL back-end?

Thank you,
Bill
 
Hi

The backend is still VFP dbf tables.

One server has the data and most of the users, each with their own .exe, and the data is on a mapped drive within the server - so it looks like a share but is really on that machine. The other machine is set up as an 'overflow' machine mapped to the first, but with fewer users on it. I set it up so that if the first couldn't handle the number of users, we could just start getting people to work through the second machine.

In practice, we haven't come near capacity on the first machine yet. There are still people using the data in the live LAN environment as well; that is what the applications were originally designed to do.

So, to a degree, the data goes over the wire from one server to the other, from LAN-based clients, and from RDP clients on the server with the data... In the sixteen years this has been running, we have not had any corruption or a need to reindex to solve a problem. That's not to say we have never packed or reindexed - that happens whenever there is a structural change; we are sitting at versions 60 and 51, with sub-versions running from A to AP, I think. We do have automation of Office, Excel mostly, but with the product licenced on every desktop already, it is not a cost issue.

Seems to work pretty well.

And it's given me a career changing and adapting them since 2004!

Regards

Griff
 
I can't contribute to that discussion, as the two applications I maintained on TS already had a SQL backend before remote usage of them started, and they were only used remotely by remote users, i.e. the main office location still used LAN clients.

Bye, Olaf.

 
GriffMG

That you haven't had any file corruption in 16 years is surprising and encouraging (maybe you can help me solve my file corruption problem). Can you tell me what server OS you are running, what your Oplocks settings are on the server (and clients), and what your settings are regarding SMB1, SMB2 and SMB3? Do you have any other settings on the server or clients to prevent file corruption?

I have Oplocks disabled on the server and on the clients, and I have SMB1 enabled and SMB2 disabled on the server, which is running Server 2008 R2. Most of the users run it in a LAN configuration.

Whatever you're doing, I would REALLY like to know, as it would solve an age-old problem I have been having. I suspect it's something to do with the file handles, or client-side caching. If I re-index every night, the problem goes away (which I think releases file handles or flushes any cached data or ??). But if I don't, I can run for weeks with no problems and then, boom, a file gets whacked (or it can happen every other day if I don't re-index).

I look forward to hearing what your settings and configuration are. Fingers crossed that maybe you are my angel on this!

Bill
 
wtotten said:
I suspect it's something to do with the file handles, or client-side caching

You are aware oplocks ARE a caching mechanism? It's called an opportunistic lock because it makes opportunistic use of a situation where one user has a file open alone (shared, not exclusive or locked) and does not need to write back changes as long as others don't access it. As long as that holds, the only client using the file works from its local cache only. That's what speeds things up for the sole client using a file. It's great for office documents with only one concurrent editor; not great for shared data files.

The server puts an oplock on that file - not a real lock; any user may still demand access to the file and get it. If another user wants to access the file, the oplock is broken by that demand, and now the first client has to commit its cache before the second user can get access.

And there are many things that can break this mechanism - I'm sparing you the details and assumptions, as misinformation here can lead to conflicts. The situation is actually worse for file pairs and triples: DBF and CDX have to be in sync to work correctly, not to mention the FPT, and the oplocks mechanism doesn't know about that; it treats each file separately. This dependency is a black box to it.

You can be lucky and oplock errors don't hit you. What also prevents them is shared file use - not just potentially shared use but actually occurring shared use - as it removes the situation in which the server can put an oplock on a file. So a busy application with many users may never encounter the problem, because at least two users always have the same table open. Other files may never see a case of truly shared access.

The typical report of such problems involves an early-bird user - the employee who comes to the office before anyone else; there you have your oplocks situation. If you don't have the oplocks situation, you also circumvent the problem. In theory, the mechanism should work anyway; the worst-case scenario is that frequent oplocks, caused by short periods of single-user access, create more overhead than if the single-user client simply committed every change directly to the files so they stayed current on the server - slowing things down, but never breaking them.

And your programming can contribute to that, by often closing files. And slow access times through index or memo bloat increase the chance of concurrent usage and thus corruption - that's why the nightly pack/reindex helps.
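The grant/break cycle described above can be sketched as a toy simulation - plain Python, no real SMB involved, and all class and method names are invented for illustration:

```python
class OplockServer:
    """Toy model of the mechanism described above: the first sole
    opener of a file gets an oplock and may cache writes locally;
    a second opener breaks the oplock, forcing a cache flush."""
    def __init__(self):
        self.contents = {}   # server-side file contents
        self.holder = {}     # path -> client currently holding the oplock

    def open(self, client, path):
        if path not in self.holder:
            self.holder[path] = client      # sole opener: grant oplock
            client.oplocked.add(path)
        else:
            holder = self.holder.pop(path)  # break: demand the cache back
            holder.flush(path, self)
            holder.oplocked.discard(path)
            # in this toy model, neither client holds an oplock afterwards

class Client:
    def __init__(self, name):
        self.name = name
        self.oplocked = set()
        self.cache = {}

    def write(self, server, path, data):
        if path in self.oplocked:
            self.cache[path] = data         # cached only; server copy is stale
        else:
            server.contents[path] = data    # no oplock: write through

    def flush(self, path, server):
        if path in self.cache:
            server.contents[path] = self.cache.pop(path)

# the "early bird" scenario: A opens alone and writes into its cache;
# the server copy only becomes current once B's open breaks the oplock
server = OplockServer()
a, b = Client("A"), Client("B")
server.open(a, "orders.dbf")
a.write(server, "orders.dbf", "rev-1")      # cached on A, not yet on server
server.open(b, "orders.dbf")                # break: A is forced to flush
```

The fragility Olaf points at lives in that break step: if the flush is interrupted or only partially applied to one file of a DBF/CDX/FPT set, the files go out of sync.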

Bye, Olaf.

 
The main server, the first one running LAN clients, RDS clients etc. is Server 2012R2.

Pretty sure the LAN clients would all be Windows 10 now; no adjustments for oplocks these days, not on the clients.

That server has only SMB1 enabled (just checked), but oplocks are enabled on both client and server (Get-SmbServerConfiguration and Get-SmbClientConfiguration say so).

One thing that might be to our advantage is the use of 'scanners' running on the servers constantly - these would normally open (and take ownership of) most of the files that the application uses BEFORE any other client had a chance to trigger an oplock scenario. The scanners do a lot of admin and reporting, so they are running most of the time (they process orders as delivered after a time and send reminders via email to staff to look at insurance details, stuff like that).

The other server is 2019, and that has SMB1 and SMB2 enabled, with oplocks allowed as well.

Regards

Griff
 
Separate answer on one specific aspect: if you have oplocks off - really off - and you get corruption when you don't reindex, you have other problems. There was a time before oplocks existed, and DBF file corruption was known back then too, so don't put the blame on oplocks alone.

The best strategies for avoiding corruption aim to avoid concurrent write access overall - and those same conditions are, in fact, the best ground for oplocks.

Bye, Olaf.

Olaf Doschke Software Engineering
 
I've asked the techie who looks after the client PCs whether he disables oplocks; I don't believe he does.

Regards

Griff
 
All my current RDP users are using a MariaDB backend.

Once upon a time there were many hundreds of users using RDP with DBF tables on several physical servers. The secret there was to have the data on a LOCAL drive (local to Windows) rather than on a mapped network drive. Then all the oplock problems go away.

Because new users were added individually over time, we monitored performance and rolled out a new server when required, rather than sizing in advance.

hth

n
 