
VFP 9.0 :: Application Performance Network Environment


Fernando Pereira

Programmer
Sep 11, 2020
Hi,

I have an application in a network environment, where the users run Windows 10.

Since Windows 10 arrived, they have been complaining about performance...

For example, a table of 60 MB can take 20 seconds just to open...
At the moment, I only open tables when I need to execute an operation (read, delete, add, or update data), and then I close the table again.
I open tables with the USE command.
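
Something like this minimal sketch of the pattern (all table and field names are hypothetical):

Code:
* The open-operate-close pattern described above (names hypothetical)
USE customers SHARED IN 0
SELECT customers
REPLACE balance WITH balance + 10 FOR custid = 42
USE IN customers   && and close it again right away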

The application works reasonably well at the beginning of the day, but as the day goes on it gets slower and slower, forcing the user to disconnect and reconnect.

Can anyone help me?

I don't know what to do anymore...

Thanks,
Fernando Pereira
 
Ok Olaf,

But, on the other hand, can constantly opening and closing tables also cause a performance problem?

Sorry, I wasn't clear either. The same table may be opened and closed many times, depending on the user's operations.
Wouldn't it be better to open it once, and just close it when the form is closed?

Could part of the problem be here, in this approach?

Best Regards,
Fernando Pereira
 
Yes, that's what I already said here:

myself said:
Often opening and closing files gives clients an opportunity to get an oplock.
And so users will get them. And they are not even the ones seeing a problem; it's always the next one.


But these problems then fade out if two or more users always have the file open; the third one, especially, sees no problem anymore. The second one has already broken the oplock of the first user and had the long wait, but then that phase is over, and from then on all users work shared, without oplocks and without the handling and timing problems of breaking one.

This has little to do with connections. Whether a mapped drive or UNC path resolution takes time to connect you again is a separate issue, about file access and about your authentication and permission to access files. Anyway, yes, keeping files open is helpful, perhaps; read back, I told you the conditions. A single user of a file has no problem, and any group of 3 or more users constantly holding a file handle also removes the oplock problem. But when you tidy up, you often have the scenario of a second user following a first user, especially when the two are on the phone with each other to look at the same data, which I guess happens in nearly every work process. I know users.

Griff also has no real control over that 3-user condition just by keeping files open, but files are then more likely to be open by many. When oplocks can't occur, they can't cause problems, of course.

But then, it's not a DE (data environment) that keeps files open; it's simply not closing them. So you can stay with your USEs, just wait with closing until the form ends. Closing the form will also close the files even when you don't open them with a DE, so that's no reason for one either. Admittedly, forms without a private datasession will not close tables when they close: the datasession that already existed before they started still exists when they close, so tables are kept open, unless you configure the DE to also close its files then. But that is no good reason at all to do everything in the default datasession 1. Also notice that datasession and DE are different things.
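
As a minimal sketch of that approach, assuming a form with a private datasession; the database and table names (mydata!orders) are hypothetical:

Code:
* Form.Load: open once, if not already open
IF NOT USED("orders")
    USE mydata!orders SHARED IN 0
ENDIF

* Form.Destroy: close only when the form goes away
USE IN SELECT("orders")   && safe no-op if the alias is already closed

SELECT("orders") returns 0 when the alias isn't in use, and closing work area 0 does nothing, so the Destroy code is safe to run unconditionally.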

A DE closes the tables it opened when the form closes. And as far as I understand Griff, he doesn't prevent that either; it's just that when users work in one form for a long time, all the tables used within it stay open all that time.

Bye, Olaf.

Olaf Doschke Software Engineering
 
I don't think there was a Server 2007... The sequence runs NT Server, 2000, 2003, 2008, 2012, 2016 and 2019

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
Well, tell that to Fernando [rofl2]

Fernando Pereira said:
I have this problem in some clients. And these clients use, as a server:
Windows Server Standard 2007
Windows Server Standard 2007
Windows 10 Pro (Not Server)

Maybe 2008 was meant. Anyway, the article has a lot of sections; just scroll through it to find the advice for a given server, and for the clients, too.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Just one more note about this: it's also not rare that all the discussion about SMB and oplocks is moot and there are other problems.

Coverage logging really is a fine way to identify bottlenecks. It logs everything that takes execution time, with very few exceptions, and since you USE tables by command, you will see how long these USEs really take, which would not show up if a DE were used. So that's your best chance to find out more.
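
Enabling it is a one-liner; a sketch, with a hypothetical log path:

Code:
* Start logging; every executed line gets appended with its execution time
SET COVERAGE TO c:\temp\perf.log ADDITIVE
* ... exercise the slow parts of the application ...
SET COVERAGE TO   && stop logging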

I'll also point you to a thread that gives an idea of how to get some extra information into the log, by making use of the fact that it logs the names of procedures: thread184-1775724
You can even log the current time, if you create a routine called "thetimeis"+TTOC(DATETIME(),1), for example, all within the string of a script that you then start with EXECSCRIPT.
So you can get the current time into the log, followed by a normal coverage log line with the execution time of the USE command.
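
One possible reading of that trick as a sketch; the LogTimeMarker wrapper and the TEXTMERGE plumbing are my own additions:

Code:
* Write a time marker into the coverage log by defining and calling
* a procedure whose name embeds the current datetime
PROCEDURE LogTimeMarker
    LOCAL lcName, lcScript
    lcName = "thetimeis" + TTOC(DATETIME(), 1)   && e.g. thetimeis20200911143015
    TEXT TO lcScript TEXTMERGE NOSHOW
        DO <<lcName>>
        PROCEDURE <<lcName>>
        RETURN
    ENDTEXT
    EXECSCRIPT(lcScript)
ENDPROC

Calling LogTimeMarker() right before a USE should place the timestamp marker directly above the coverage line that times the USE.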

And as already suggested, you can find long-running single lines as easily as sorting the log by execution time, descending.
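
For example, a sketch that assumes the standard comma-separated coverage log layout (execution time, class, procedure, line number, file, call stack level) and the log path from above:

Code:
* Stop logging first so the file is no longer held open
SET COVERAGE TO
* Pull the log into a cursor and show the slowest lines first
CREATE CURSOR covlog (nSecs B(6), cClass C(60), cProc C(60), nLine I, cFile C(200), nLevel I)
APPEND FROM c:\temp\perf.log TYPE DELIMITED
SELECT * FROM covlog ORDER BY nSecs DESC INTO CURSOR slowest
BROWSE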

Bye, Olaf.

Olaf Doschke Software Engineering
 
You could try this from the command line

Code:
Reg add hklm\System\CurrentControlSet\Services\Lanmanworkstation\Parameters /v DirectoryCacheLifetime /t REG_DWORD /d 0
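
Presumably the Workstation service then needs a restart (or the machine a reboot) before the new value takes effect; something like:

Code:
net stop workstation /y
net start workstation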

Based on info from here:


Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
Which actually is what this does:

Fernando Pereira said:
set-smbclientconfiguration -DirectoryCacheLifetime 0
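
For what it's worth, the matching Get cmdlet should let you verify the current value:

Code:
Get-SmbClientConfiguration | Select-Object DirectoryCacheLifetime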

The article really is comprehensive about what helps in which situation, once you know it's oplocks. But I've also seen it so often that the discussion just locks into that one topic, as if it were the only cause.

And then you never hear back. And never know whether it helped or not. And the next discussion starts from the same level.
Get information. Measure. I know the deal is to solve a problem and not just analyze it, but you also never know how much analysis helps and accelerates things, if no one ever uses the chance for further investigation.

Bye, Olaf.



Olaf Doschke Software Engineering
 
Of course Olaf, you are right!

Regards

Griff
Keep [Smile]ing

There are 10 kinds of people in the world, those who understand binary and those who don't.

I'm trying to cut down on the use of shrieks (exclamation marks), I'm told they are !good for you.
 
It's okay, Griff, everyone has seen me overlook things, too.

I also understand that a problem causes panic mode, but you see, it just takes one or two lines of code to get some more log information, and that helps both with a healthy and with a problematic application, no matter whether the problem comes from the hardware or LAN settings or is inherent. Measuring times and tracing what happens is quite a normal practice, also to have a long-term record of the norm and its fluctuations. And indeed, coverage can also be done at runtime, not just within the IDE.

Bye, Olaf.

Olaf Doschke Software Engineering
 
You are absolutely right.

I already implemented some logging, and it is the USE command that takes the seconds.
I will analyse it in more detail.

And I will also test closing tables only when closing forms.

Thanks,
Fernando Pereira
 
Okay, I bet COVERAGE will be more precise, but nonetheless, I also see you USE ... IN 0, so we are talking about the opening step. Perhaps TABLEVALIDATE 2 is not the best option, as it is not just a check when opening or closing. Try what you gain by turning it off completely with SET TABLEVALIDATE TO 0. Yes, I mean that, even though you risk a record count error in the header; build in a simple DBF fix that opens the table with TABLEVALIDATE at 0 and then does APPEND BLANK and DELETE, which fixes the record count again. That's the least problem to worry about.
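
A sketch of that repair; badtable is a placeholder name, and it assumes you can get exclusive access for the fix:

Code:
* Repair a wrong record count in the DBF header, with validation off
SET TABLEVALIDATE TO 0
USE badtable EXCLUSIVE   && assumption: exclusive access is available
APPEND BLANK             && writing a record forces the header count to be corrected
DELETE                   && remove the dummy record again (deleted, not yet packed)
USE
SET TABLEVALIDATE TO 2   && or back to whatever your app normally uses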

And then: do you log the current time? Can you see concurrency, i.e. who else last opened a DBF before someone had a long USE time? Do you also log when someone closes a table? Do you keep a count of how many users have a table open? None of that is automatically in the coverage logging, but those logs, with a little help from some current-time markers, would let you see all of that, as precisely as the system clocks are synced with an internet time server.

Bye, Olaf.

Olaf Doschke Software Engineering
 
One idea I got while going for a walk: did you compare USE with any other kind of file opening, especially FOPEN in read mode, and for comparison also in read-write mode? Any file, not a DBF, in the same share. Just to see what portion of the time is spent on the network connection, permission checks, and getting the handle; VFP will not do much more than that in this regard. Another operation whose time consumption would be of interest is FSEEK(handle,0,2) on the opened file.
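
A sketch of that comparison; the share path is a placeholder:

Code:
* Time low-level file access on the same share, outside any DBF handling
LOCAL lnStart, lnHandle
lnStart = SECONDS()
lnHandle = FOPEN("\\server\share\anyfile.txt", 0)   && 0 = read-only
? "FOPEN read-only:", SECONDS() - lnStart, "seconds"
IF lnHandle >= 0
   lnStart = SECONDS()
   FSEEK(lnHandle, 0, 2)   && seek to end of file
   ? "FSEEK to EOF:", SECONDS() - lnStart, "seconds"
   FCLOSE(lnHandle)
ENDIF
* repeat with FOPEN(..., 2) for the read-write comparison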

Bye, Olaf.

Olaf Doschke Software Engineering
 
Hi,

I applied only two changes, to see how things went today.

These being:
[ol 1]
[li]Opening a table only when necessary, and closing it only when the form closes.[/li]
[li]Opening tables with "USE Database!tableX AGAIN SHARED IN 0" instead of "USE 'X:\myApp\DataBase\myTable.dbf' ALIAS 'tableX' AGAIN SHARED IN 0"[/li]
[/ol]

The first change should only make itself felt after the first use of a table.
The second, every time a table is opened.

The table that used to take 20 seconds to open became instantaneous... and that happened after applying point 2 alone.
For the other point, I waited to validate the application's behavior over time, that is, whether it kept getting heavier; that also didn't happen.

Conclusions:
[ul]
[li]Opening a table only when necessary, leaving it open, and closing it only at the end seems to be better.
But here I have to be careful with the Windows cache, so that records are not lost.[/li]
[li]Opening a table through the database is better than using the DBF file path.
Is it because Windows does not manage it the same way here?[/li]
[/ul]

In your opinion, is there any conclusion to draw from this experience?

Thanks,
Fernando Pereira
 
First of all, I don't think you can draw definite conclusions after one day of changes. Even so, congratulations on getting the USE of that table to instantaneous; but, well, you could also just have been lucky. One thing is okay, though, we have discussed and confirmed it: keeping tables open means that more often you have the situation that 2 or more clients use a file and none of them has an oplock.

But once more: oplocks themselves don't take time, breaking oplocks takes time. And you can never avoid an oplock being granted; one user will always be the one that uses a file first. And if you change your ways and, anticipating the 20-second wait, start your app as the first user of the day, in a celebration like a new shop opening, you won't have a 20-second wait, because the first user is not the one that has the problem.

It's the second user, and not the second one that day, but the one who gets the second file handle on the file. If the first user of a DBF closes it before a second user USEs it, that user also has no problem. As you had this problem regularly, though, the second-user situation must occur quite often. You will still always have a second user, and that one will have a long USE time, unless the first USE was so recent that breaking the oplock isn't much effort.

And what is breaking an oplock, and what does VFP have to do with it? Well, nothing. The VFP runtime was as it is before that mechanism was introduced, or changed to the behavior that's bad for VFP, and also for Access and other software. As oplocks are not real locks, a second user never gets access rejected: a second user demanding shared use of a file gets it, but before he gets it, the OS breaks the oplock. The first client isn't asked whether he likes that or not, the system just does it. But whatever goes wrong at the level of OS clients now being asked to flush their caches, maybe VFP's own caching mechanism is involved here. It shouldn't be, as VFP has no idea it even had an oplock; you can't detect that you have one, and VFP only acts differently on files when it knows it has real exclusive access.

So, I don't know; I'd always expect someone to get one long USE, and that only never happens when there is never concurrent use of a file. But what's better about going through this phase of one user paying the "penalty" is that once a third or fourth user opens the file and the number of handles never drops back to 1, no oplocks are granted anymore.

There is one much simpler solution, though, and that is to prevent oplocks from happening at all. Did you even look at the link I gave, I think, 3 times already?
Now, regarding your changes: USE database!longtablename will in the end also use the same file; work area and alias name have no influence whatsoever on oplocks. The only thing that changes is that you look at the DBC once more. But there is another reason this actually makes almost no difference for DBFs of a DBC: if you open a DBF by its file name, without a reference like database!longtablename, you still also open the DBC, or at least verify the consistency of the backlink to its DBC stored in the DBF header. Did you ever notice? If you close all tables and databases and just USE a DBF of a DBC, the DBC is also opened automatically. So you have a flurry of side actions there anyway, no matter how you do it.

Once a DBC is open, there is a high chance it's read into memory completely, as a DBC isn't very large, unless you add a lot of stored procedures to it. Then this flurry of side actions also becomes inexpensive. Plus, all users use the DBC file and will surely close it only rarely.

So, all in all, I only expect change 1 to make a difference. And it also only plays a role if, despite all the settings to turn them off, oplocks are still granted.

Again, 2008 Server has a solution for that, so you can then almost spare yourself the changes. Still, I think there is another good aspect to the rather long-term use of files: I never had good experiences with the tidiness of immediately closing everything you can, since you always need to build everything up from scratch again.

Regarding saving into the DBF file: with buffering you can see the value stored in the DBF with CURVAL(), from any client, whether you have buffered changes yourself or not. This is also a chance to predict a write conflict: when your buffered value differs from both CURVAL() and OLDVAL(), TABLEUPDATE wants to change the DBF but detects that, while you changed the field from its old value to your workarea value, CURVAL() changed as well. That's not a technical problem; the way conflicts are meant is: look, pay attention, someone else already changed this value, and differently from what you want. You can always choose the force option and disregard that, making the last change win.

Effectively you have that same situation without buffering: all changes go straight to the DBF, so whoever saves last, his change wins.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Now for some more practical advice on how buffering is, in general, not bad for keeping control over what is saved to the DBF.

You hinted at closing DBFs also to ensure the caches are written. On the one hand, you never have full control over what the OS and the hardware do with their caching on top of VFP's, which in turn is on top of and separate from buffering. So you could still be the victim of a false assumption even without buffering. The only time data is saved back to the controlsource is when a control loses focus; you don't write every single keystroke you make in a textbox or editbox, or every choice made in a combobox. VFP writes only after VALID says okay, and you can't trigger that by calling VALID, the focus has to change. Mike can sing you a song about what that means for a save button on a toolbar. It's general VFP base knowledge that a toolbar save button needs to ensure a focus change from form.ActiveControl, usually using the trick of setting focus to what already has focus, as that also triggers the whole chain of events.
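
That trick as a minimal sketch, e.g. in the toolbar save button's Click, before calling the actual save routine (a sketch, not a drop-in implementation):

Code:
* Force the active control to commit its value before saving
LOCAL oForm
oForm = _Screen.ActiveForm
IF VARTYPE(oForm) = "O" AND VARTYPE(oForm.ActiveControl) = "O"
   oForm.ActiveControl.SetFocus()   && refocusing fires the Valid/LostFocus chain
ENDIF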

On the other hand, you can verify what's in the file. There are two things that read from the DBF: SQL SELECT and CURVAL(). CURVAL() only works with buffering, but let me show how it could be your verification, and how buffering isn't a problem with write conflicts. If you work with no buffering, you accept the "last change wins" strategy of shared data access, and that's easy to reproduce with the force option of TABLEUPDATE.

So here we go, all essential code that's usually in several parts of a form:
Code:
* sample data preparation
* only need to run this once.
* can run multiple times anyway
Close Tables All
Close Databases All
Cd Getenv("TEMP")
Erase oldvalsample.*

Create Database oldvalsample.Dbc
Create Table oldvalsample.Dbf (iid Int Autoinc, cData c(15) Default "blank")
Append Blank
Use
Close Databases All
* only need to run this once.
* ------

* clean start
Close Tables All
Close Databases All
Clear
Cd Getenv("TEMP")
Set Exclusive Off
Open Database oldvalsample.Dbc

* preconditions:
Set Multilocks On

* init/load
Use oldvalsample.Dbf In 0
Select oldvalsample
CursorSetProp("Buffering",5,"oldvalsample")

* Now working on data (anywhere and anytime within form lifetime)
* Update yourtable...
* Insert into yourtable (...)
* Replace
* Append Blank, whatever, here let's just do
Select oldvalsample
? 'initial'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show blank, blank, blank, all initial values

Replace cData With "new"
? 'after replace'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show new, blank, blank, you only changed the workarea buffer until now

* save button
If Not Tableupdate(2,.T.,"oldvalsample")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show all the same new, so notice, after tableupdate oldval is updated,
   * the new saved value becomes the old value which later is the basis of judging conflicts
   * (if you don't force your changes)
EndIf
* change user 1 buffer in preparation of demo about workarea, buffer, curval, oldval
replace cData with "buffer"

* now demonstrating when oldval and curval could differ and
* proving curval doesn't mean your current (buffer) value
* but current dbf value
* this happening from another user:
? 'user2'
Use oldvalsample.Dbf In 0 Again Alias curvaldemo
Select curvaldemo
CursorSetProp("Buffering",5,"curvaldemo")
? 'initial'
? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
Replace cData With "change" In curvaldemo
? 'after replace'
? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
* change, new, new
* save button
If Not Tableupdate(2,.T.,"curvaldemo")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? curvaldemo.cData, Oldval("cData","curvaldemo"), Curval("cData","curvaldemo")
   * will show change, change, change
   ? 'What user 1 sees at this time:'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show new, new, change
Endif

* back to initial user
? 'user 1'
Select oldvalsample
Replace cData With "last change"
* seeing the conflict: your change from oldval "new" to "last change" differs from curval "change" set by the other user
? 'after replace'
? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
* will show last change, new, change

* nevertheless, last change wins:
* save button
If Not Tableupdate(2,.T.,"oldvalsample")
   ? 'whoops, something went wrong'
Else
   ? 'after save'
   ? oldvalsample.cData, Oldval("cData","oldvalsample"), Curval("cData","oldvalsample")
   * will show last change, last change, last change
   * last change wins.
Endif

Essential things to take from this sample:
1. CURVAL() only changes with TABLEUPDATE; it reflects the currently stored value of a field, so it can be your validation of a save.
And this is the good news about it: you can test and convince yourself whether TABLEUPDATE works as it should in a network with its configuration and behavior. I've seen things not working as expected, and there's quite a lot of variation in what is better used or not used. But once you see CURVAL() confirming that it reads back what you write, you don't need to implement this as a permanent validation step (you could, again in the interest of logging and spotting deviations from the norm). But you see that buffering and TABLEUPDATE give you a means to save, with a definite point in time for saving changes, that does not require closing a DBF.
2. TABLEUPDATE with the force option knows no conflicts; this comes closest to no buffering in avoiding the need for any conflict checks, changes come as they come. You may experiment with .F. for no force.
3. workarea (buffered), oldval and curval can be 3 different values
4. OLDVAL() is updated once you TABLEUPDATE and flush your buffer; it doesn't stay at whatever the value was when you first USE a table.

And if you remove the REPLACE cData WITH 'buffer', you also see that user 1 sees the change of user 2. This just shows that buffers don't decouple you from other users' changes; they only do so once you store something in them, since they start empty, not filled with all the old values. When you have nothing buffered for a record, the DBF is read and a workarea field becomes CURVAL(), too, just like working unbuffered.

So there is no need to fear totally different behavior from buffering: you're not isolating yourself from other users' changes, you're only isolating others from your changes until you commit them.

Andy Kramer once wrote a lengthy article about why you never need row buffering. The essence is that you can also do single-row table updates yourself in table-buffered mode, and you have better control over when they happen. You can really struggle even when you have events like a grid's BeforeRowColChange and AfterRowColChange; taking it into your own hands, you get, at worst, a .F. from TABLEUPDATE instead of an error triggered by some code that changes the record, or even just a grid click, which makes the source of such an error extremely diffuse.

The only conflicts you can still get with the force option of TABLEUPDATE are about violating table and field rules, or violating index uniqueness if there's any chance of that at all; but those things would also happen to you working unbuffered, and if you don't have them, you don't introduce them just by switching to buffering.

And last but not least: you don't need to establish an edit mode or a save button; you can autosave per timer, or make focus changes trigger a TABLEUPDATE if you want to come closest to the unbuffered modes. But you can see how many more file system actions working unbuffered on DBFs means, and why that might be a reason for the problems in the first place. Buffering means less frequent acting on DBF files, less concurrency, fewer problems.
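
The autosave-per-timer idea as a rough sketch; the class and its cAlias property are my own invention:

Code:
* Commit buffered changes of one alias once a minute, forcing "last change wins"
DEFINE CLASS AutoSaveTimer AS Timer
   Interval = 60000
   cAlias = ""
   PROCEDURE Timer
      IF USED(This.cAlias)
         IF NOT TABLEUPDATE(2, .T., This.cAlias)
            * log or otherwise handle the failed save here
         ENDIF
      ENDIF
   ENDPROC
ENDDEFINE

* usage:
* loSaver = CREATEOBJECT("AutoSaveTimer")
* loSaver.cAlias = "oldvalsample"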

Bye, Olaf.

Olaf Doschke Software Engineering
 
Hi,

First of all thanks to everyone who helped me.

I was able to stabilize and solve the problem. The solution involved:
[ul]
[li]Cache settings in SMB[/li]
[li]The tables with the most impact are opened only once, and closed when the menu is closed.[/li]
[/ul]

Best Regards,
Fernando Pereira
 