VFP 9.0 :: Application Performance Network Environment

Status
Not open for further replies.

Fernando Pereira

Programmer
Sep 11, 2020
Hi,

I have an application in a network environment where users run Windows 10 (bits).

Since Windows 10 came along, they have been complaining about performance...

For example, a 60 MB table can take 20 seconds just to open...
At the moment I only open tables when I need to execute an operation (read, delete, add or update data), and then I close the table.
I open tables with the USE command.

The application works reasonably well at the beginning, but as the day goes on it starts to get slow... forcing the user to disconnect and reconnect.

Can anyone help me?

I don't know what to do anymore...

Thanks,
Fernando Pereira
 
Sounds a bit oplock-ish.

Do you know much about that?

SMB3 on Windows 10. What kind of server do you have?

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
Griff has a good point worth checking, but I've never heard of a USE taking that long.

Obviously, at first, everybody would assume you're misjudging this and it's not the USE that takes long; the time could come from a query, from scanning the file, or from whatever reads you do. Let's estimate that: a SELECT * NOFILTER reading the full 60 MB in 20 seconds would mean a network bandwidth of only 24 Mbit/s (3 MB per second = 24 Mbit/s). I hope your LAN is today's standard 1 Gbit/s = 1000 Mbit/s. Besides that, this would not vary much over the day. Of course, more users slow down a LAN, but the maximum number of users will be reached before noon; it does not degrade over the whole day.

You could make the final check on that by measuring the times of USE with SET COVERAGE. Log to a local file for each user, and let the application's startup code copy the previous log to a share (or do it when the application ends). You can limit this to logging just the USE lines, and then you'll see. Depending on how OOP this code is, it might be cumbersome to establish: there might not be a good place directly before and after the individual USE commands, as those would be very individual places in the code, but you could start logging in a base form's Load and end it again in a base form's Init, Activate or Show, to catch the phase of opening tables in the DE or wherever you do the USE. By the way, a DE opening the tables would not contribute to the coverage log, as the log only records code lines; the DE with its cursor objects will iterate them and open the tables, but that's native base behavior that gets no lines in coverage. So for that matter it's good that you code USE commands, as it makes the timing visible and measurable.

Thinking about what could take long for the USE itself, which does not do much more than an FOPEN operation establishing a file handle, the only mechanism worth noticing is table validation; SET TABLEVALIDATE tells more about that. It was mainly introduced to reduce or prevent the most frequent reason for DBFs breaking: a wrong record count saved in the header. The behavior can now be controlled quite precisely; you can decide to check the reccount at opening, at closing, or both. A check does a header lock, and that would be the only operation that could take longer. There's a big suspicion, though, that this is not responsible for the 20 seconds it needs. VFP only tries to lock once, not repeatedly - this also doesn't change with the REPROCESS setting. If VFP can't lock the header, it does a reccount check by comparing a consistent file length against the count in the header, but that also won't take long, as it only means determining the file size; in low-level terms that might come from file system information or from an FSEEK to the end of the file, and both are short operations.

So even when validation at opening of a DBF is turned on, the single lock try - and the alternative check without it - shouldn't take long, no matter whether oplocks play a role or not. Oplocks are not really locks; they are a caching mechanism acting as if the file were exclusive, but not in real exclusive mode; any other client, e.g. the one for which USE takes long, would break this "exclusive" mode. I bet this will still mean the header lock fails: even though an oplock isn't exclusive, the client holding the oplock acts as if it has exclusive access, and it needs to take notice and write out any changes - break the oplock - before a real lock of any kind can be made. Otherwise this could lead to inconsistent behavior: the client that has had the table open longer, with an oplock, still thinks it can act on the file as if it had exclusive access, while the new client already thinks it has a lock, a real lock - and a header lock is a file lock, not just a (soft) record lock. Nevertheless, VFP will only try once, and on not getting a lock it will immediately tell the system "never mind".

So again, in short: I don't think table validation explains a long USE time, as even a header check configured to happen at USE would not reprocess until it gets a lock; it just tries once and then does the alternative check. I don't see how that could take long.

But: for the purpose of ruling table validation out, you could set this to 0 for a week or so. You don't lose much; table validation was introduced with VFP8, and we lived without it in VFP7 and earlier, too. That's no reason to keep it off permanently, but once you find it to be the problem, there'd be a chance to finally figure out what's slowing this down so much. Especially in conjunction with COVERAGE logging you'd find out a lot about it. Instead of letting your application become a lab for this, you can of course also experiment with test code. Such measuring runs could be made overnight, for example, "emulating" usage of files from multiple clients; it's also not a big deal to do it under the full real conditions of application, LAN and users, but with a separate code experiment you can generate many more measurements and much more test data.
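
Such a test could be as simple as timing the same USE under different TABLEVALIDATE settings. A minimal sketch, with a hypothetical table path and alias, using SECONDS() for the timing:

Code:
* Time a shared USE with validation off, then with validation at open
LOCAL lnStart, lnNoValidate, lnValidate
SET TABLEVALIDATE TO 0              && no header check at all
lnStart = SECONDS()
USE X:\myApp\Data\myTable.dbf ALIAS tableTest SHARED IN 0
lnNoValidate = SECONDS() - lnStart
USE IN tableTest

SET TABLEVALIDATE TO 2              && header check when opening
lnStart = SECONDS()
USE X:\myApp\Data\myTable.dbf ALIAS tableTest SHARED IN 0
lnValidate = SECONDS() - lnStart
USE IN tableTest

? "USE without validation:", lnNoValidate, "seconds"
? "USE with validation at open:", lnValidate, "seconds"

If both timings stay high, validation is ruled out and the time must come from somewhere else, e.g. the SMB layer.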

If there is the slightest chance this is still simply due to reading in whole DBFs (and FPTs), you will just need to change your ways, and COVERAGE will tell: even if you only log form starts including the USEs, excluding the timings of query or other read/write access, you'll be able to identify or rule out the USEs. And by setting TABLEVALIDATE to 0 only for some forms and not others, you can also see how that influences USE time.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Hi Griff,

I know practically nothing about oplocks. Or about SMB3 on Windows 10...

I have this problem at some clients. These clients use, as a server:
[ul]
[li]Windows Server Standard 2007[/li]
[li]Windows Server Standard 2007[/li]
[li]Windows 10 Pro (Not Server)[/li]
[/ul]

I already had a lot of problems with Windows 10 update 1803... data loss, problems with the database... And after some time I resolved it with:
[ul]
[li]set-smbclientconfiguration -DirectoryCacheLifetime 0[/li]
[li]set-smbclientconfiguration -FileInfoCacheLifetime 0[/li]
[/ul]

The app in question has more than 300 customers and 600 users... The only thing they have in common is Windows 10.
But the performance problems have never disappeared, and they have been intensifying.
What users say is that the application starts well, and over time it gets slower...

The code may not be perfect, but I don't believe the problem is there, because it all started with Windows 10... And since then I have already made changes - some I don't even know whether they were good or bad... just attempts...

Any ideas?
Thanks
 
I have seen USE, or any file-opening operation, stall when Windows lets the 'owner' of a file open it first and does some kind of oplock thing,
and won't let anyone else get to it - sometimes until the owner lets the file go.

I've not seen this behaviour in a few years - so the server would probably have been a Windows NT Server from around 2000?

If the OP is talking about a modern server, maybe running on a VM, you can still get problems though.

Regards

Griff
 
Ok,

That may be the case for one client or another, but for most clients we are talking about Windows Server 2012.
I also have clients where the database sits on a Windows 10 Pro PC, and the other machines access it through a mapped network drive.

You started by talking about SMB3.
What can I do to check whether it's that or not?

Some aspects of code:
[ul]
[li]SET REFRESH TO 0, -1[/li]
[li]SET TABLEVALIDATE TO 2[/li]
[li]I use a lot of macros[/li]
[li]I don't use Requery[/li]
[li]I don't use the form's Data Environment[/li]
[li]I don't use Buffer Mode[/li]
[li]Access to the database is done through mapped network drives.[/li]
[li]Table opening example: USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0 [/li]
[/ul]

For network environments, is there any aspect that should be taken into account when programming? Or any configuration I should use?

 
Well, I can see you have already had to look at SMB then.

Server 2007 is not the latest tech and is getting a bit hard to support. I would imagine there's a chance you are running fairly small hard drives - perhaps they are close to capacity or starting to fail?

What is the nature of the app?

Is the data significantly large? Have you got a well indexed design?

Regards

Griff
 
The application is an ERP - my clients' main management software.

It manages everything from purchases and sales to inventory and production.

Best Regards,
Fernando Pereira
 
I have databases of 1.2 GB... Yes, it's significantly large.

I use a lot of indexes, and Rushmore says it's OK... I use a lot of SEEKs rather than SELECTs...

Best Regards,
Fernando Pereira
 
I presume you have a pack and reindex facility...

So we are down to Windows 10 not really liking Server 2007. Are you sure it's 2007? I know there was a 2000, and then I thought it was 2003 and 2008?

Could you set up a similar installation at your own office and write a bit of test software to run on a couple of Windows 10
workstations - opening a table and closing it, leaving it running for a while to see if it slows down?


Regards

Griff
 
I will try.

What recommendations do you have for client server environments?

If you have a document registration form, where the lines are recorded as they are inserted, do you recommend reopening the line table every time? Or opening it only once?

At the moment, when opening a document I access the table and extract the data into a cursor.
Then, whenever I need to record new lines, delete or change, I open the table again, make the change and close it right away.

Could this be the problem?
Frequently opening/closing files?

Best Regards,
Fernando Pereira
 
It worked before you had Windows 10?

It's not my approach, I open tables in my form's data environment and keep them open all day unless I close the form and open a different one.

There is no reason not to do it your way though - it should be safer, perhaps not as quick.

Regards

Griff
 
Could your workstations be disconnecting from the share after a timeout of some sort? If you open a table, download a subset and then close it... could Windows 10 be releasing the share after ten minutes or so, and having to reconnect/authenticate when you go to use the file again?

Regards

Griff
 
Often opening and closing files gives clients an opportunity to get an oplock.

Oplocks can be turned off in Server 2012 (I don't know where you see 2007 mentioned, Griff), but the other thing that prevents oplocks from being granted is multiple users having the files open. So that way of working with DBFs indeed makes oplocks more frequent. By the way, it's not a good strategy to "turn off" oplocks by setting up one client to always have all DBFs open. It would take two extra clients, as a single one would hold oplocks on all files, which any secondary client would then need to break. So not a good idea. Oplocks can still be controlled in 2012, too.

I'd still like to see the timings. Whatever else you do, could you set coverage for a client and extract the time a USE takes? It's not that straightforward, I know, but I'd really like to see how much there is to it.

So for example do
Code:
* Log just this one USE to a local coverage file
SET COVERAGE TO C:\logs\usetablex.log ADDITIVE
USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0
SET COVERAGE TO  && turn logging off again

Or, as suggested, start logging in Load and end it when your code USEing tables has run, i.e. in Init, Activate or Show.

You then also don't need the Coverage Profiler; a simple APPEND FROM ... TYPE CSV is good enough to see the log data. In case you put the coverage commands right before and after a USE, you will only have two lines in the coverage file per USE (unfortunately the SET COVERAGE TO that turns logging off also lands in the log), so every odd line number of that log is an interesting time. You could also use SECONDS() and more, but the measurement coverage does is actually quite precise, and you need less code than with anything else, too.

Reading a log into a cursor is simple:
Code:
* Coverage log columns: duration, class, method, line number, file, call stack level
Create Cursor curCoverage (bExecTime B, cClass C(128), cMethod C(128), iLine I, cFile C(254), iStacklevel I)
Set Point To '.'  && the coverage log uses '.' as the decimal separator
Append From C:\logs\usetablex.log Type CSV
And then you could simply index bExecTime descending and look for the single lines (like a USE would be) that take longest to run. Of course, it doesn't tell you what in detail runs long, but I'd analyze what times you have. I guess often enough it will be short - that's for the clients which get the oplock - and long for the other clients waiting for the oplock to break. If you can also synchronize the timings of the logs (for example by file creation/last-update time), you could see a consistent pattern of which client having a short USE time causes which other client to have a long one, whether there is a preference, a priority in the network, etc.
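
For instance, continuing with the curCoverage cursor from above, the slowest logged lines can be listed like this (the TOP count and result cursor name are arbitrary):

Code:
* Show the 20 slowest individual code lines from the coverage log
SELECT TOP 20 bExecTime, cFile, cMethod, iLine ;
   FROM curCoverage ;
   ORDER BY bExecTime DESC ;
   INTO CURSOR curSlowest
BROWSE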

It would also be easy enough to extract a list of (cFile, cClass, cMethod, iLine) where the execution time is long. The type of code decides what is set - a PRG obviously is not a method of a class (unless you run a line within a class definition inside a PRG) - but this tuple is a unique location in your code. The extreme execution times might also turn out to be something else, if you log more than the USE commands. USE might not be your only bottleneck, and time might also be spent diffusely, just by the sheer quantity of iterations, for example. Avoiding queries is no guarantee of running faster; queries can be better optimizable than your manual code.
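
Extracting that list of locations could be a simple GROUP BY over the same cursor - a sketch, again assuming curCoverage from above:

Code:
* Aggregate total time, maximum time and hit count per unique code location
SELECT cFile, cClass, cMethod, iLine, ;
       SUM(bExecTime) AS tTotal, MAX(bExecTime) AS tMax, COUNT(*) AS nHits ;
   FROM curCoverage ;
   GROUP BY cFile, cClass, cMethod, iLine ;
   ORDER BY 5 DESC ;
   INTO CURSOR curHotspots
BROWSE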

Bye, Olaf.

Olaf Doschke Software Engineering
 
Initially I had that approach, and at the time I didn't have these performance problems.

But in mid-2018, because of the heap of problems caused by Windows update 1803, I changed a lot of things, until I realized the fix was:
[ul]
[li]set-smbclientconfiguration -DirectoryCacheLifetime 0[/li]
[li]set-smbclientconfiguration -FileInfoCacheLifetime 0[/li]
[/ul]

At that time I sometimes had the problem that a table, although it was in the data environment, didn't open, and then it gave an error.
How do you place the tables in the data environment? Is it by code?

Do you use buffering on tables, which then requires TABLEUPDATE?

No, the workstations can't be disconnecting from the share after a timeout, because I open the exe from the same network drive to test that.

Best Regards,
Fernando Pereira
 

What about the NoDataOnLoad option?
Do you use it?

Best Regards,
Fernando Pereira
 
NoDataOnLoad only makes sense for views. You don't load data with a USE of a DBF.
And the NoDataOnLoad property of objects in the data environment is also disregarded for DBFs; it only plays a role with views.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Fernando Pereira said:
I have this problema in some clients. And these clients use, as a server:
Windows Server Standard 2007
Windows Server Standard 2007
Windows 10 Pro (Not Server)

Regards

Griff
 
Fernando,

I see your questions go to Griff, but opening tables via the DE is not the difference that gives him fewer oplock problems; it's more the keeping of all tables open. After three or four users are in a system - and if it's just about a bunch of DBFs anyway - nobody will have an oplock: all DBFs are opened shared, and that has much more to do with keeping them open than with opening them via the data environment.

You can do the same with USE; oplocks only have a chance to cause trouble when they can be established, which means when nobody else has the file open. And then the first one also has no trouble.
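
If you want to try that with plain USE commands, the sketch is just: open the shared tables once at startup and leave them open (the table path and alias here are hypothetical):

Code:
* At application startup: open shared tables once, keep them open all day
USE X:\myApp\Data\myTable.dbf ALIAS tableX SHARED IN 0
* ... all reads/writes during the session work on the open alias ...
* Only at application shutdown:
USE IN tableX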

Bye, Olaf.

Olaf Doschke Software Engineering
 
Okay Griff,

I see it now. With multiple customers in several network situations, the problem gets harder to manage, of course. But actually a 2007 server would make turning oplocks off easier, wouldn't it?
As said, the last time I posted a lengthier Microsoft technical article, it described turning off oplocks for systems still supporting that, and that support ended with 2012; 2007 is earlier, so it should work out, too.


Bye, Olaf.



Olaf Doschke Software Engineering
 