
VFP 9.0: Abysmally slow performance on Windows 2008 Server

Status
Not open for further replies.

dkean4 (Programmer), Feb 15, 2015
Three years ago I installed a little utility for a local company. Testing it in a single-user environment, it performed slowly but was workable. When I installed the final version and tested it on two terminals concurrently, access to the DB tables slowed down so much that it became unusable. I read up on that some, but I never had a chance to fix the problem. It is well known, I'm sure... something about double caching on the newer servers, which needs to be disabled, etc. Needless to say, I lost that customer, even though my utility was superbly fit for the task.

I am about to do another install, for another app, in a multi-user environment. Any advice on how to tame MS servers would be appreciated. I am an old-time Novell CNE. Is Novell still alive? Are there better servers than the latest MS incarnations, fit for VFP 9.0?

Let me add that I was using the native DBF tables, not the SQL backend side of VFP, in that app.

Dennis
 
GriffMG,

If Novell is part of Micro Focus... that site takes 10 minutes to load. Bye-bye, Novell... probably hosted from a home server...

DK
 
MikeLewis,

"although you will of course need to test my solution before you can be sure it is the culprit. It will be interesting to hear what you discover."

I just bought a PC to act as the server, and I have an MS 2008 R2 server DVD which I have been holding on to for too long. It will take me some time, but I have to solve this issue once and for all. I love VFP too much...

Dennis

(No aphorism at this time)

 
Dennis,

A few years ago a client hired me to perform a performance audit on their VFP application, which was running very slowly. They had considered all the obvious causes, including an over-aggressive anti-virus, oplocks on the server, etc. Before I could start work, their hardware guy asked me to hold off until after he had installed a network-attached storage (NAS) drive.

He went ahead with the NAS, and transferred the VFP app to it. Suddenly, the problem disappeared. The app started running at a very good pace, and the client was delighted.

I hesitate to recommend this solution in your case, because, to be honest, I don't understand why a NAS should make such a difference to performance. I can only say it worked for my client, and it might work for others.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
NAS drives tend to be very good at the one thing they were designed to do, serving files, with no bells and whistles.

Most of them run a Linux/Unix derivative rather than a M$ Windoze one.

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
MikeLewis,

Good to hear that. Thanks for the tip, MikeLewis. It is cheap enough to do a test. Do you know which NAS brand? I have bought NAS systems before, years ago, and some were really dysfunctional.

I need to find a solution to this nagging problem. M$ is very strategic, and they know how to dislodge their competition. I was just reading about Novell; a great OS is now gone, so we are left with bloatware. Today's processors are monumental (I come from the digital-design world... microcode, etc.), yet Windows still crawls like a grandma with a broken leg, even with 64-bit processors and massive RAM. I finally upgraded all my PCs with SSD drives and they perform better, but knowing the hardware inside, it is hard to believe that we still feel like we are riding the original Ford.

Great tip... Merci!


Dennis
 
When I have encountered this very same problem, the issue was always the transfer rate from a workstation to the server. Does your VFP app run well when run directly on the server? If it does, there are plenty of fixes for this if you Google "slow server to workstation 2008".

 
As far as I remember, the NAS was a Buffalo, but I don't know the model number. I think Griff was right when he said that it was running a Linux/Unix derivative. I do know that there was no problem with VFP's file- and record-locking. We did very careful tests of that before committing to the drive.

The footnote to my story is that I decided I couldn't in all conscience submit an invoice for the job.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
IAmTheWolf,

In order to answer you, I now need to build my MS 2008 R2 server. I am no longer working with clients on these problems, so I cannot answer you with certitude; I kind of gave it up a few years back. But I will have the server up soon, and then I will be able to do all the testing I need. I need to be able to solve this problem the next time I encounter it.

I did not realize how helpful you guys are. StackOverflow has nothing on you guys. But wait, they do... they do have bastards who enjoy shutting down a question to score personal points and leave folks stranded. Ha ha ha...

All you guys Rock, totally. Best forum by far...

Dennis
 
Just for fun, let me add a rant about MS, not to be taken too seriously:

I remember that not so long ago MS even had some TV commercials for Internet Explorer 9. That browser WAS really fast at the time, though you'll find lots of "honest" versions and parodies of this commercial...

Now the not-so-serious picking at MS: it seems the more security issues are found in an MS product, and the more fixes it gets from the teams caring about security, the less pure the underlying concept becomes and the more sluggish things get.

Obviously I'm not an insider who really knows what happened to IE9, but history seems to repeat itself. Today, on a newer Windows 10 (Home) PC, the Edge browser is again a very fast browser, but I'd not be surprised if it wears down over time. Other development teams manage to keep their browsers up to date and fast at the same time. I also notice some sites working best with a particular browser. Obvious combinations: any Google service works best with Chrome, MSDN and other MS-related sites work best in IE (as sluggish as it gets with the rest of the web), and Firefox seldom disappoints in being very picky about web standards, so I often test web development on Firefox.

Bye, Olaf.
 
Yeah, I worked for years with Firefox. Firebug has some very nice features in that swarm, but Chrome's debugger holds its own lately. IE's debugger, however, looks crude at times and seems to have many limitations to me.

Large companies tend to bloat their products because they want to lead. So: hurry up and get it out, before everything congeals.


Dennis
 
About DBF on a LAN:

The fastest way to work with DBFs has always been exclusive, single-user access to locally stored DBFs. The file-protocol mechanisms VFP depends on for manual and automatic locks have always been a dependency that is not good for multi-user work. I moved away from using DBFs as a multi-user backend in 2008 for the major applications I work on, and there are only two backends at that company which stayed DBFs. VFP can play a good client for MSSQL via CursorAdapters. That's how I do the fox today.

The best mechanism you can use to keep LAN effects to a minimum is optimistic table buffering, which effectively means no locks until committing buffered changes. The reference to learn why is Andy Kramek's article on this subject.

The problem with oplocks is, as said, that they are a caching rather than a buffering or locking mechanism. Table buffering already makes VFP keep changes local until a user either saves/commits (TABLEUPDATE) or cancels/reverts (TABLEREVERT); up to that point VFP itself does not change the DBF anyway. The big issue is that when you DO submit changes, they should go straight to the DBF so they are available right away for others, and with oplocks that is no longer the case. In that respect, oplocks are like table buffering that does not even write when data is committed, holding back the write until there is no way around it anymore. Just-in-time is an elegant concept, but not with data.

Since the major load on any normal database is read access, writes are not the LAN accesses to prevent and delay until absolutely unavoidable; the earlier you write, the earlier data can be seen by others. You can cache read access in many layers and levels, but no write access should be delayed more than necessary.
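A minimal sketch of what I mean; the table and field names here are made up, so only the buffering mechanics are the point:

```foxpro
* Minimal sketch of optimistic table buffering; table/field names are made up.
SET MULTILOCKS ON                          && prerequisite for table buffering
USE orders SHARED
CURSORSETPROP("Buffering", 5, "orders")    && 5 = optimistic table buffering

* Edits stay local to this user's buffer - no locks, no write to the DBF yet.
REPLACE amount WITH amount + 10

* Commit all buffered rows in one short write burst - or throw them away.
IF NOT TABLEUPDATE(.T., .F., "orders")     && .F. = don't force over conflicts
    TABLEREVERT(.T., "orders")             && here: simply give up our edits
ENDIF
```

The point is that the only LAN write happens at the TABLEUPDATE(), and that write should reach the DBF immediately instead of sitting in an oplock cache.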

Bye, Olaf.
 
MikeLewis,

"As far as I remember, the NAS was a Buffalo"

Thanks for the name of the NAS. I am all over the web looking to order one; the prices range widely. I don't really know what to order, but I'm exploring the features to see what fits the bill. It will likely take me most of the afternoon!

Thanks so much.

Dennis
 
The top-selling NAS variants for their rich features here in Germany (and maybe not only here, as I Google them) are Synology and QNAP systems. But that's a more general recommendation, not necessarily what works best with DBFs. Very likely the reason is that a Linux Samba server is working at the other end. For that reason you could also use a Linux box as a server, but obviously a NAS takes away the task of installing and maintaining a Linux system.

Bye, Olaf.
 
Not sure how relevant this is, but....

I have a product (written in VFP and using native VFP DBFs) at about 700 customer sites, in various configurations: from single-user Windows XP, to 20-seat Windows Server, to RemoteApp (hosted on our own servers).

Around last September I started noticing many customers reporting performance issues with Windows server setups: record locking in particular, with many getting 'error reading file r:xxx.dbf' messages several times a day. The more seats and the busier the customer, the more frequent the problem (i.e. my biggest and most important customers were worst affected).

At first I blamed networks, but it was too widespread, and each network checked out just fine. Then I noticed it was even happening on our own servers with the RemoteApp setups (at the time, each customer's RemoteApp was installed on one of six terminal servers, with everyone accessing a private folder on a shared network NAS drive for their data).

My thinking was (and still is) that a Windows update had done something. I tried all the usual oplock and SMB settings, but to no avail. And of course MS no longer supports VFP, so no point whining to them.
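For completeness, the "usual" settings I mean are along these lines. These are the commonly cited registry value and service names for Server 2008/2008 R2, but apply them on a test box first and reboot afterwards; they didn't cure it here, and disabling SMB2 costs you its performance features for everything else:

```bat
:: Disable oplocks on the server side (effective for the SMB1 protocol only):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v EnableOplocks /t REG_DWORD /d 0 /f

:: Force clients back to SMB1 (per MS KB 2696547), since SMB2 has no oplock
:: off-switch - run on each workstation, then reboot:
sc.exe config lanmanworkstation depend= bowser/mrxsmb10/nsi
sc.exe config mrxsmb20 start= disabled
```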

For our RemoteApp users I have now replaced the shared NAS drive with a large locally installed drive on each terminal server. I wrote a test program to compare performance, and it's dramatically better with the local drives.
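The test program essentially just timed a batch of inserts against each path; something like this, where the paths and the record count are made up:

```foxpro
* Rough sketch of the comparison; paths and the record count are made up.
LOCAL laPaths[2], lnPath, lnI, lnStart
laPaths[1] = "C:\temp\"       && local drive
laPaths[2] = "R:\data\"       && NAS / network share
FOR lnPath = 1 TO 2
    CREATE TABLE (laPaths[lnPath] + "perftest") (id I, payload C(100))
    lnStart = SECONDS()
    FOR lnI = 1 TO 5000
        INSERT INTO perftest VALUES (lnI, REPLICATE("x", 100))
    ENDFOR
    ? laPaths[lnPath], SECONDS() - lnStart, "seconds for 5000 inserts"
    USE IN perftest
ENDFOR
```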

Needless to say, I'm condensing all this; the various stages took months.

I'm now about 20% of the way through migrating to SQL Server (oh joy). I probably should have done that years ago, but it's a huge program and .......


n
 
Olaf,

I may be off base here, but I see one ambiguity arising in the client/server SQL model.

1. Client Olaf fetches a record.
2. Client Dennis fetches that same record, makes an adjustment, and UPDATEs it to the table before Olaf.
3. Client Olaf makes a change to the same spot in the record, but his data is different. He updates it and subsequently overwrites what client Dennis has done.

But Olaf never saw Dennis's update; his own fetch of the data preceded it. And I encounter this all the time in the SQL model. Users are constantly telling me, "I changed that value to xxxx!"

Maybe I am missing something. I see that as a disaster model.

Just to be clear, I have worked with SQL since it first appeared in VFP; I am not holding back on SQL. But I do like DBFs because I thought they prevented the problem above... your post seems to say that they create it too!

Dennis
 
You overlook that you can detect update conflicts. Especially with TABLEUPDATE() not using the lForce=.T. option, Dennis's change is seen, the TABLEUPDATE() "fails" on such rows, and Olaf can be warned or asked to change.

In detail, you can look at CURVAL() to see Dennis's value; OLDVAL() shows Olaf what he initially loaded, and the cursor/buffer of course holds his edit. This also works with a SQL backend, not only with DBFs. That way you could even merge changes, as long as they occurred on untouched fields.

Such editing collisions point to organisational flaws: why are two users with the same application role writing to the same data (same record)? Data could be organised by last name (A-F, G-L, ...), so you have fewer collisions. Records can be split 1:1 into fields that play a part for each application role, so Olaf and Dennis only act on "their" half of the record.

thread184-1643905 has some sample code demonstrating the principle, though that thread is mainly about refresh.
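In code, the principle looks roughly like this; the alias and field names are illustrative, and optimistic table buffering is assumed to be active already:

```foxpro
* Sketch of conflict detection; assumes optimistic buffering is active
* on alias "orders", and the field names are illustrative.
LOCAL laFailed[1]
IF NOT TABLEUPDATE(.T., .F., "orders", @laFailed)   && .F. = no lForce
    * Another session changed the row since we fetched it:
    ? "On disk now (Dennis): ", CURVAL("amount", "orders")
    ? "What we loaded:       ", OLDVAL("amount", "orders")
    ? "Our pending edit:     ", orders.amount
    * Options: merge, retry, TABLEREVERT(), or force via TABLEUPDATE(.T., .T.)
ENDIF
```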

Bye, Olaf.
 
Olaf,

I love your answer, as always! What is mind-blowing is that large companies like SunGard, AT&T, Fidelity and others did not implement these features when I was working with them. They just let it slide; I raised the issue, and it was dismissed without much regard.

SunGard is a VFP-based shop. AT&T and Fidelity are not...

 
It is indeed a normal thing in client/server: you query data as a copy and work on it, later committing changes without regard for whether you are overwriting intermediately changed data with old values.

One possible approach would be composing an UPDATE that only updates changed fields, so that edits made by other users in untouched fields are not overwritten with the OLDVAL just because the later save came from the older session.
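VFP's remote views and CursorAdapters can generate exactly such an UPDATE for you: only modified fields go into the SET clause, and the WhereType property controls which old values end up in the generated WHERE clause. A sketch, where the view and field names are made up:

```foxpro
* Sketch with a remote view; view/field names are made up.
USE myRemoteView                 && a remote view defined in the current DBC
CURSORSETPROP("Buffering", 5)    && optimistic table buffering
CURSORSETPROP("WhereType", 3)    && 3 = DB_KEYANDMODIFIED: the generated
                                 && UPDATE's WHERE compares the key plus the
                                 && old values of only the fields we changed
REPLACE amount WITH 125.00
IF NOT TABLEUPDATE(.T., .F.)
    * No row matched the WHERE: someone changed "amount" first - a conflict.
ENDIF
```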

On the other side, not committing full records can lead to all kinds of table-rule violations not detectable at any single client, e.g. with a timespan stored in two fields, start/end. Dennis pushes start towards end; Olaf pushes end towards start. Their changes don't overlap in fields, but can now produce a record where newstart > newend, even though each single edit satisfies the rule, i.e. oldstart < newend and newstart < oldend.

Anything you do has pros and cons. The ideal solution really means pushing changes into a user's session while another user edits, so Dennis's edit can be seen by Olaf, and Olaf can't push the end any further than to the new start. But the best solution really is avoiding such editing of the same rows, by splitting data by responsibility or by role.

A live view of data also has its limitations. WebSockets and Ajax make such things possible in web applications; you'll perhaps sooner or later see the day when a new post is added to the loaded thread as you submit, instead of the whole page reloading after the submit. It's in testing. But since each user is only responsible for his own posts, you'll surely not see a live change of posts while an author edits them. That also has a cost you don't want, not only in bandwidth but in the maintenance of the overcomplicated code needed for such syncing. It is available and makes sense for collaboration on texts; this can, for example, be done in Google Docs.

Bye, Olaf.
 
The point about optimistic locking - and the conflicts that arise from it - is really a business issue, not a technical one.

What does an airline do when Client Olaf and Client Dennis both try to book the last seat on the same flight at the same time? Do they accept both bookings, and hope that they will be able to "bump" one of them on the day? Do they turn round to one of the passengers and say, "We know you have just spent ten minutes going through our booking procedure and giving us your payment details, but somebody just got in there before you. Tough."

Or do they adopt pessimistic locking - so when Dennis goes to the airline's site, they say, "You can't start booking your seat for a while, because Olaf is busy booking his. Wait until he's finished, then we'll see how things stand."

You see what I mean when I say it is a business decision. There's no perfect solution, but you choose the course of action that gives the least worst result for your business.

And, of course, the same sort of thing happened in the old days, when the airlines used manual booking systems, with agents having to phone a central office to make a booking.

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 