
Things that would slow FoxPro down over a network?


ralphcarter
Programmer
Aug 8, 2006
I have a FoxPro application that opens 200 tables. About 1/3 of the tables are read, and all of their records are written out to a report. The other 2/3 of the tables are referenced only for certain match conditions. Processes that run in 20 seconds on a standalone machine are taking 3 minutes over the network. There are fewer than 5,000 records in any of the tables. We chose not to use a database container, just free tables.

My question is: what are the general ways I can use to speed up access to the data?


Yes, we do use LOCATE in different situations where we are trying to find records that match conditions on fields we have not identified as indices.

Would using the SET FILTER command slow things down?
What else should I look for to speed things up?
What are some general optimization tricks?

I know this is an ambiguous description of the problem; perhaps I can narrow it down by answering questions. I will admit some ignorance in this situation.

Thanks

Ralph
 
SET FILTER and LOCATE can very easily choke performance, as will any network file access compared with local file access.
The best thing to do is to make sure your network equipment performs well, and to create indexes wherever possible instead of relying on LOCATE.
Also, it may help to have some relations set.
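
For example, a minimal sketch of indexing a field and seeking on it instead of using LOCATE (the table and field names here are hypothetical):

Code:
* One-time setup: open the free table exclusively and add an index tag.
USE customers EXCLUSIVE
INDEX ON cust_id TAG cust_id   && stored in the structural customers.cdx
USE

* Later lookups can use the index instead of a sequential LOCATE:
USE customers SHARED
SET ORDER TO cust_id
SEEK "C1001"
IF FOUND()
    * ...process the matching record...
ENDIF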



-Dave Summers-
 
SET FILTER will certainly slow things down because Fox has to pull every record across the network before it knows whether or not it needs that record. You'll see an improvement if you can use SQL to create a cursor because you'll then be working with a few records on your local disk rather than with a lot of records on a network drive.
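
A minimal sketch of the cursor approach (the table, field, and cursor names are hypothetical):

Code:
* Instead of SET FILTER TO state = "MS" on the network table,
* pull only the matching rows into a local cursor once.
* NOFILTER forces a real local result set rather than a filtered alias.
SELECT * FROM orders WHERE state = "MS" INTO CURSOR csrOrders NOFILTER
* The report then works against csrOrders on the local disk.

With an index tag on state, Rushmore can satisfy the WHERE clause from the index, so far fewer records cross the wire.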

Geoff Franklin
 
Ralph,

I think you already know part of the answer. You said:

we are trying to find records that match conditions on fields we have not identified as indices.

The presence or absence of indexes will make all the difference. Or, to be more precise, the presence of the correct indexes.

I would start by trying to identify precisely which queries are causing the slow-down, and then look to see if the appropriate index tags are in place. To identify the slow queries (or any other slow bits of code), either use the Coverage Profiler (on the Tools menu) or, more simply, capture the time (via the SECONDS() function) at the start and end of the process and display the difference in a WAIT window or similar.
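
A rough sketch of that timing technique:

Code:
lnStart = SECONDS()
* ... the process being measured ...
lnElapsed = SECONDS() - lnStart
WAIT WINDOW TRANSFORM(lnElapsed) + " seconds" NOWAIT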

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro tips, advice, training, consultancy
Custom software for your business
 
Another technique is to copy infrequently changed lookup tables to the local PC. This requires that you track when a table has changed; any updated lookup tables will need to be copied locally again (perhaps at application startup). To minimize changes to your existing code base, you can give the local copies a slightly different name and use those copies when running your report process.
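A minimal sketch of the startup refresh, with hypothetical paths and table names:

Code:
* Refresh a local copy of a lookup table if the server copy is newer.
lcServer = "\\server\share\data\states.dbf"
lcLocal  = "c:\app\localdata\states_loc.dbf"
IF !FILE(lcLocal) OR FDATE(lcServer) >= FDATE(lcLocal)
    USE (lcServer) ALIAS srcStates SHARED
    COPY TO (lcLocal)
    USE
ENDIF
* The report code then opens states_loc instead of the network table.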
 
Ralph,

Is this still a problem? You've had several suggestions, but have not been back here since posting your question. It would be helpful to know if any of the suggestions have been useful (or, at least, if you've taken the trouble to read them).

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro tips, advice, training, consultancy
Custom software for your business
 
Mike, sorry for the lack of feedback. I have read all of the suggestions but, due to my schedule, I have not had an opportunity to act on any of them.

Thank you all for the suggestions. We have used the SET FILTER command quite extensively, and this is a very large application, so it will take a while for me to sort through all the code and find out where the slowdown is taking place.
A follow-up question on SET FILTER: one of the things we have done is to issue the command SET FILTER TO to clear filters. Does this generate the same type of traffic that setting a specific filter would?
 
It's not the setting of a filter that causes the overhead. The setting doesn't take effect until you move the record pointer, and then Fox has to go through the whole process of getting a record, inspecting it, discovering that it doesn't match the filter, getting the next record, inspecting that one, and so on.

By the way, you're not using a filter to decide which records appear in a grid, are you? That's fearfully slow.

Geoff Franklin
 
Another thing that's guaranteed to slow your program to a crawl is to use a filter on a table that's got an active index.

Consider this code:

Code:
USE TheTable
SET ORDER TO state
SET FILTER TO state = "MS"

You'd expect this to be fast, but it's not. If you apply a filter after setting the index, VFP will go to the first physical record in the table, then step through the records one at a time, without regard to the index, until it finds the first one that matches the filter. With a big table, that could take a long time.
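
A sketch of one faster alternative for the same hypothetical table and tag, using the index directly:

Code:
USE TheTable
SET ORDER TO state
SEEK "MS"                      && positions via the index, no physical scan
SCAN REST WHILE state = "MS"
    * ...process each matching record...
ENDSCAN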

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro tips, advice, training, consultancy
Custom software for your business
 
SET FILTER isn't an automatic no-no. It is Rushmore-optimizable, following the same rules as other commands.

Tamar
 
Well, Tamar, SET FILTER may not be an automatic no-no, but its Rushmore optimization is different from that of the WHERE clause of SQL. SQL creates a result set; sometimes that result set is itself a filter, but taking that exception aside, Rushmore will use indexes to limit the list of records it should fetch and then fetch them into the resulting cursor.

When you set a filter, you just trigger the search for the first record fulfilling that filter. From then on, every record pointer movement in the filtered table is checked against the filter condition. Even with grids set to grid.Optimize = .T., you'll often see sluggish scrolling, which is a side effect of how the next or previous records are determined.

With a filter set, each of the mentioned checks of the filter condition is Rushmore-optimized, but that is totally different from optimizing the fetching of a result set as SQL does.

The Rushmore optimization of SET FILTER rather compares to the optimization of LOCATE.
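
To make the contrast concrete, a small sketch (hypothetical table, with an index tag on state assumed):

Code:
* SQL: Rushmore uses the index once, then fetches the matches
* into a local result set (NOFILTER rules out the filter exception).
SELECT * FROM TheTable WHERE state = "MS" INTO CURSOR csrMS NOFILTER

* Filter: only each individual condition check is optimized;
* every record pointer movement re-evaluates the filter.
USE TheTable
SET FILTER TO state = "MS"
GO TOP    && triggers the search for the first match
SKIP      && each movement checks the condition again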

Bye, Olaf.
 