
VFP 9.0 Abysmally slow performance on Windows 2008 Server

Status
Not open for further replies.

dkean4

Programmer
Feb 15, 2015
282
US
Three years ago I installed a little utility for a local company. Testing it in a single-user environment, it performed slowly, but it was workable. When I installed the final version and tested it on two terminals concurrently, access to the DB tables slowed down so much that it became unusable. I read up on that some, but I never had a chance to fix the problem. It is well known, I'm sure... something about double caching on the newer servers, which needs to be disabled, etc. Needless to say, I lost that customer, though my utility was superbly fit for the task.

I am about to do another install, for another app, in a multi-user environment. Any advice on how to tame MS servers would be appreciated. I am an old-time Novell CNE. Is Novell still alive? Are there better servers than the latest MS incarnations, fit for VFP 9.0?

Let me add that I was using the native DB, not the SQL DB part of VFP, in that app.

Dennis
 
For maximum speed, having the correct index tags is normally most important.
 
What would be a correct tag as opposed to one that is not correct?
 
Index tags must match exactly the expressions you use for selecting the data. And seldom-used index tags should be removed, since they make the index file bigger, and thus slower to read. Since there are many ways to get the same result in VFP, giving a short answer to a short question isn't easy. A quick Google search for Rushmore Optimization will give you a good starting point, for instance.
Please show some of your slow code, and we can hopefully tell you how to speed it up.
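To illustrate what "matching exactly" means, here is a minimal sketch (table, field, and tag names are made up for the example):

```foxpro
* Rushmore can only use an index whose key expression matches
* the filter expression character for character.
USE customers
INDEX ON UPPER(lastname) TAG upr_name   && seldom-needed tags just bloat the CDX

* Fully optimizable: the WHERE expression matches the tag expression
SELECT * FROM customers WHERE UPPER(lastname) = "SMITH" INTO CURSOR hits

* NOT optimizable: no tag on plain lastname, so VFP scans the table
* and drags every record across the wire
SELECT * FROM customers WHERE lastname = "Smith" INTO CURSOR misses
```

While testing, SYS(3054, 11) makes VFP report the Rushmore optimization level of each query, so you can see at a glance which filters are fully, partially, or not optimized.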
 
First, thank you for your replies. Second, there is no slow code in it. I know how to optimize my indexes and my code. This happens only in situations where the utility is installed on a server and accessed from terminals on the network. When the app is installed locally on the terminal it flies! Install it on a server, be it 2005, 2008, or 2012, and it dies. And I tried it on many different servers with exactly the same result. It is not the app.

 
The point is to reduce the traffic over the wire, and that's done by optimizing. Try with fewer records, and see for yourself.

But for us to help you, you must provide some code, so that we know how you work. And note that I am not talking about slow code, but code which isn't able to get the data quickly from the server.
 
A couple of other things to check:

- Even though the data is shared from a server, you should install the executable program (including all the runtime files) on each user's local hard drive. Doing this can bring about a big improvement in performance.

- Check your anti-virus program. If the AV is configured to scan DBFs, CDXs, etc. whenever those files are opened, then that will slow things down quite a lot. In general, there is no need for an anti-virus to scan data files.
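As a concrete sketch of that exclusion advice: on a server whose AV is Windows Defender, the data extensions can be excluded with the Defender PowerShell cmdlets (available on recent Windows; older servers or third-party AV products have their own exclusion settings, and the path below is hypothetical):

```
# PowerShell, run as administrator on the file server (Windows Defender example)
Add-MpPreference -ExclusionExtension ".dbf", ".cdx", ".fpt", ".idx"

# Or exclude the whole shared data folder instead (path is an example):
Add-MpPreference -ExclusionPath "D:\AppData\VFPTables"
```

Whatever the AV product, the point is the same: stop it from re-scanning the DBF/CDX/FPT files on every open.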

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
With direct binding there is no code to speak about for transferring data. There are no Memo/General fields in those tables. There is nothing heavy for the traffic. And when you move the app installation from the terminal to the server it slows down. When two people access the same data, it nearly dies.

It has nothing to do with the code. Make a few grids on a form which are wired to several tables, with no code other than a start.prg file needed to call up the first form, and test it on the terminal and on the server. Compare the speeds, and it will be a lot slower on the server install. There are hundreds of companies which had perfectly working VFP apps in the 90s and are experiencing abysmally slow performance with today's MS servers. I worked with one of them last year. Same problem as I had. And that is why I had to switch my career to UI developer, an abysmally slow process of development compared to VFP.
 
No code = no help. Sorry, I am not a mind reader. I tried to ask for necessary info from you, but in vain.
 
Mike,

Yes, you hit the right nerve about installing it on the terminal side and letting only data travel across. Come to think of it, three years ago I actually did that. I gave the customer the install on a pen drive and all the registration of DLLs was there. Nevertheless, you are correct about that. However, it did not seem to help.

I read a while back that VFP installs trigger double and triple caching of data on the server, and there is some parameter on the server side which needs to be set to disable it. Unfortunately, I gave up on VFP after a gargantuan effort to adjust this problem.

The thing about virus protection: I believe that may have been part of the problem. Good call, Mike. The admin was a nut job getting in my way, and I think he installed something on the server for continuous packet inspection. I am going to install an MS server in a few weeks and do this experiment again in my own environment.
 
tbleken,

I did not mean to upset you. I am a nut job on performance. I am one of the few FoxPro developers who has used the LOAD command, from the early versions on, to accelerate performance with assembly language... anything to get an edge. Also, I do not have the code at hand. I have to dig it up. I was looking for a general solution. I think that Mike hit it on the head. And there is also something to the new incarnations of servers put out by MS. I will try to find the article and post it.

Thank you kindly for the effort. Next time I will prepare some code before asking the question.

Regards,

Dennis
 
Many years ago, I recall, Norton was the poster child for AV churning. Surely alternatives are built into that app now. Yes, exclude the data tables and indexes from scanning.

If that doesn't help, then there are many threads on multi-user optimistic/pessimistic table buffering/locking, etc. But that's a different world of confusion (for me); determine first whether aggressive AV is the issue.
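For anyone following up on the buffering side, the basic optimistic pattern looks roughly like this (table and field names are invented for the sketch):

```foxpro
* Sketch of optimistic table buffering on a shared table
SET MULTILOCKS ON                      && required before table buffering
USE orders SHARED
CURSORSETPROP("Buffering", 5)          && 5 = optimistic table buffering

REPLACE qty WITH qty + 1 FOR ordid = 42    && changes stay local...

IF NOT TABLEUPDATE(.T., .F., "orders")     && ...until committed here
   * another user changed the same rows first; discard our edits
   = TABLEREVERT(.T., "orders")
ENDIF
```

With optimistic buffering, no locks are held while the user edits; conflicts only surface at TABLEUPDATE() time, which is usually the friendlier choice on a busy LAN.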
 
Sounds like opportunistic file locking to me

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
I knew there was another term on the tip of my tongue fingers... opportunistic. Every time I dig into the details my eyes glaze over, probably because I'm just reading and don't actually use them.
 
...single user environment it was performing slow, but it was workable.
That in itself should be a big clue. Apps I have developed and installed locally in a single use environment have usually performed plenty adequately if not screamed, no matter how much they were pounding on tables. So before digging too deep into server optimization or issues, I think the code, indexes, and logic may need to be re-examined.


-Dave Summers-
[cheers]
Even more Fox stuff at:
 
Indeed, opportunistic locking should rather be named greedy caching, and only single users profit from it. It has the nature you describe.

Simplified description: if a user on client A holds the only file handle on a file, he gets an opportunistic lock, which grants exclusive access under the condition that the lock can be broken by any second user also needing access. So it's not a real lock; it breaks when needed. Until that time, all data is cached on the computer of the opportunistically locking user. So the moment the lock is broken by a request from client B, the file server is serving an old version of the file and needs to ask client A for the changes. The server can't know what portions of the file have changed, so it becomes a relay server to client A until all changed blocks are committed and the real file server is up to date again. That does not work perfectly.

You can imagine this needs a very stable LAN, and in a star-topology network the path from a second client B to the file becomes twice as long, as it goes from client B to the server to client A. The file server is still responsible for the file, but when it gets a read or write request from a second client B, it depends on client A being responsive to the request to end the lock. If the client is unresponsive, that alone can take considerable time before the second client gets the first bit of the requested file. On top of that, switches and firewalls are not ideally set up for client A to act as a file server. All the bad properties of a peer-to-peer LAN are in effect now.

Even if you have write-through caching on, client A doesn't write to the DBF file; all changes are cached, even those of END TRANSACTION, INSERT-SQL, UPDATE-SQL, or TABLEUPDATE(). The load on the network is zero as long as client A is the only one accessing the file. This makes it quite a risky type of caching, too: if the network has outages, data integrity is not ensured. It can pay off for caching documents, when a second user is seldom expected. I am still waiting for the day we can configure it per file type.

Many things can go wrong if the file server and client A get out of sync and each has a different idea about the lock status. The file server might think it is up to date and serve the old file state. In that situation it's fast, but wrong!
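For reference, the classic server-side workaround was to switch opportunistic locking off entirely. A sketch of the usual knobs (note the registry value is only honored by the SMB1 stack; on SMB2/3 servers, 2012 and later, the closest control is the leasing setting shown in the comment):

```
REM SMB1 only: disable opportunistic locking on the file server, then reboot
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v EnableOplocks /t REG_DWORD /d 0 /f

REM SMB2/3 (Server 2012 and later), in PowerShell:
REM   Set-SmbServerConfiguration -EnableLeasing $false
```

This trades single-user caching speed for multi-user consistency, which is usually the right trade for shared DBF data.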

The big difference in your setup, Dennis, seems to be that installations on a terminal server have the EXE and DBFs on that machine only. Users have sessions on it, but that's fine with enough RAM. That always excludes the LAN component and the SMB file protocol handling the opportunistic lock between EXE and DBFs. The only LAN access is from the clients to the terminal server, and that doesn't involve file protocols; it's about the transfer of graphics, or the GDI+ or Direct2D commands to redraw them the same way they were drawn on the server side. That involves a whole different set of protocols, like RDP, not involved in that caching/locking scheme at all.

Bye, Olaf.
 
Olaf,

As usual, you did some good thinking about this. Let me mention this as a contrasting idea from an age that has passed. I started peddling FoxBase apps from the day it came out. And I was a Novell CNE in those days. In 1987 I installed a massive application for a claims-review company in Manhattan, with about 30 terminals, and I used all the graphics I could squeeze into FoxPro 2.5. So it was not just ASCII thrashing; plenty of images were thrashing in all directions. I also installed 3 fax servers and many print servers on that system. Anyone could send faxes from the terminals and print on any printer in the office. The speed I achieved with that Novell 3.5 or so system was astounding. In NYC I was visited at that customer site by many mainframe guys, and they refused to believe this was possible. I would run reports on every terminal, just as Novell would demo their OS at shows. The terminals stuttered a bit, like Novell's demo, but the report pages were flying.

Once MS bought out FoxPro and VFP 3.0 came out, I was never able to get anywhere near that performance. If I could get 1/5 of that performance, I would feel good and shout. Is it the TCP/IP protocol? Is it MS's redesign of FoxPro? (MS likes to bloat everything until it becomes useless.) I know not. But not being able to get any significant performance has killed my career. Speed was my selling point.

My follow-up question for you is an obvious one. Can VFP be made to perform well on a LAN? I have not seen it to date! I love the OOP MS added to VFP and the myriad features which transformed FP into VFP. The true inheritance and rapid development, compared to the web tools, put VFP in a class of its own. I am currently learning Ext.js, and though Ext.js is a far cry from raw JavaScript and the flea market of frameworks (jQuery, Backbone, Angular), the flexibility of VFP puts it into a class of its own. I run into customer after customer who complain about VFP performance. And each one is truly abysmal. Is there hope for getting good performance with VFP, in this day of processors which put the 386 machines of 1987 into the stone age? If all things were equal, VFP should be flying to Mars and back in one minute...

Thank you for your great response... by the way...
 
He means antivirus, it might be scanning the dbfs and what not before allowing access.

It sounds more like oplocks to me: one person opens the files and the others are effectively locked out, because M$ made a pig's ear of implementing a feature that only existed to make Windoze seem as fast as Netware, back in the days when everyone was going Token Ring...

Regards

Griff
Keep [Smile]ing

 
GriffMG,

I think that you answered my question in part. Novell was fast and far more specialized than MS, which tries to service everything and everyone, bloating and bloating until the tower of Babel becomes a useless behemoth.

Do you know if Novell could support VFP these days? Is Novell still alive?

Thanks for the heads up about AV... Don't know what I was thinking...

Dennis
 