
Speeding up processes 3


florindaniel

Programmer
Dec 4, 2009
Hello,

I found some of the ideas in the previous thread (i.e. "Speeding up this process") very interesting, because that's a generic problem we often face.

If you don't mind, what about gathering some ideas under a generic topic (like this one) about the do's, don'ts, and no-no's of speeding up processes?

- What about using a FOR clause in an index?
- I understand it is better to SET INDEX first and then SET FILTER (or is it the other way around?); why?
- Indexing is good and speeds things up, but where is the limit? What if you get LARGE .cdx files?
- And above all: all that in a shared, networked environment.
etc.....

Thank you
Daniel
 
The best thing you could do to answer many of your speed questions is to read the help topic "Using Rushmore Query Optimization to Speed Data Access".
- What about using a FOR clause in an index?
I would avoid the FOR clause in an index. What if the demands change? What if you now need to access those records? Why not create the index for the entire data set, then access it using a FOR clause?
- I understand it is better to SET INDEX first and then SET FILTER...
I avoid filters. In fact, unless I am looking at a small table manually to research a particular thing, I don't use them at all. There is really no reason to. You can use SCAN FOR/WHILE ... ENDSCAN to access only the records you need, as in the sketch below.
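A minimal sketch of the idea (the table and field names here are made up for illustration):

Code:
* Instead of SET FILTER TO amount > 1000, touch only the matching rows:
SELECT orders
SCAN FOR amount > 1000
   * process the current record here
   REPLACE processed WITH .T.
ENDSCAN

With an index tag on amount, the SCAN FOR condition is Rushmore-optimizable as well.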
- Indexing is good and speeds things up, but where is the limit? What if you get LARGE .cdx files?
Your indexes won't ever be larger than your tables (well, they shouldn't be, anyway). And with hard drives as inexpensive as they are now, disk space isn't usually an issue. VFP's 2 GB table size limit is usually the limiting factor.
Granted, if you update your tables heavily, having a lot of index tags to maintain will slow things down. But the best approach is to choose your indexes wisely and normalize your data.
Especially in a networked environment.



-Dave Summers-
 
In absolutely every case, the best answer is "it depends". What works in one circumstance may not work in others.

- What about using a FOR clause in an index?

In general, the FOR clause in indexes pre-dates the Rushmore engine. It was added when there was no other way to produce a filtered set. Use it sparingly.

- I understand it is better to SET INDEX first and then SET FILTER (or is it the other way?); why?

First, avoid SET INDEX. Use CDX tags and SET ORDER instead. And avoid filters entirely (bearing in mind my first remark).
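A quick sketch (the table and tag names are just assumptions):

Code:
USE customers               && assumes a structural .cdx with a tag named cust_name
SET ORDER TO TAG cust_name  && instead of SET INDEX TO customers.idx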

- Indexing is good and speeds things up, but where is the limit? What if you get LARGE .cdx files?

Indexes generally only slow things down when they're updated. For a data set that is mostly query-only, large indexes won't cause problems. But every time the data is updated, each index tag must be updated, and that can cause performance issues.

This is why SQL's INSERT INTO is better than APPEND BLANK/REPLACE: the index tags are updated only once for the insert, as opposed to once for the append and then again for the replace.
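For example (the table and field names are hypothetical):

Code:
* One pass over the index tags:
INSERT INTO orders (order_id, amount) VALUES (m.lnId, m.lnAmount)

* Two passes: once for the blank append, again for the replace:
APPEND BLANK
REPLACE order_id WITH m.lnId, amount WITH m.lnAmount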

- and above all: all that in a shared, networked environment

There's no magic pill for this one. Shared access will always slow things down, because VFP must go back to the server to refresh its cache for every record you retrieve.
 
Hello and thank you all for your kind answers.

Well, that's true about filters, I know, but in this case perhaps you can suggest a better programming technique, since the one I am using now forces me to use filters.

I use the same form to process documents, whether they are supplier invoices or customer invoices. When I add lines to a supplier invoice I want to see all the products, but on a customer invoice only the ones having quantity > 0. So I prepare the products table and set a filter for quantity > 0, or select an index on Product_ID FOR quantity > 0. I prepare it in advance because there are some 80,000 items, and setting the filter (the way I do it now) each time the user selects a product is unproductive.

On the other hand, I cannot create a cursor with the appropriate products, since there is more than one user at a time and the quantity can vary according to each user's operations.

I don't like that approach myself, but I couldn't find a better way (until now? :) )

Thank you
Daniel
 
Preparing an INDEX FOR somecondition is about as good as setting a filter without that index, if you only do it once. Creating the index costs time, and if you don't reuse the index, that time is wasted.

An index on some field can help Rushmore optimize WHERE clauses that compare that field to some value, and not only the = operator is optimized, but also >, <, >= and <=. The only operator not optimized is <>, IIRC, but you can query WHERE NOT (field = value) instead.
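For example (table and field names are hypothetical):

Code:
* <> prevents optimization; NOT (... = ...) can still use an index on status:
SELECT * FROM products WHERE NOT (status = "X") INTO CURSOR csrActive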

SET FILTER TO <expression> is completely replaceable by an SQL SELECT with WHERE <expression>, so it can be replaced without any restrictions on what you can filter.
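A sketch of the replacement (the names are assumptions):

Code:
* Instead of: SET FILTER TO quantity > 0
SELECT * FROM products WHERE quantity > 0 INTO CURSOR csrInStock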

Another disadvantage of indexes with a FOR clause is that they are not used by Rushmore, even if the WHERE clause of an SQL query is identical, so those indexes are only good for SET ORDER or SET INDEX.

Bye, Olaf.
 
Thank you Olaf,

I still can't imagine how to use SELECT-SQL instead of filtering in my case above, but I'm working on it and hope it works.
Do you think SELECT ... FROM Products WHERE (Qty > 0) AND <some other condition>, feeding a grid, is fast enough that I can use it each time I open the product selection form (i.e. each time I add a document line)?

Thank you again,
Daniel
 
I don't know your exact design, but having an index on the productID field and another on the quantity field would speed up any WHERE clause that filters on a specific productID and on quantity > 0 (or whatever value you want to filter on).
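Something like this (the tag names are made up):

Code:
USE products EXCLUSIVE      && adding tags requires exclusive access
INDEX ON productID TAG prod_id
INDEX ON quantity  TAG qty
* m.lnId holds the product you are looking for;
* both conditions below can now be Rushmore-optimized:
SELECT * FROM products WHERE productID = m.lnId AND quantity > 0 ;
   INTO CURSOR csrHits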

Bye, Olaf.
 
It actually sounds like you have a data normalization problem.

With a properly normalized database you'd create a parameterised view and requery it when needed.

The size of your source table is of little importance. The size of the result set is more important. VFP can return small result sets from large data sources quickly.
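A sketch of what that might look like (the view and parameter names are assumptions, and the table has to live in a database container):

Code:
* One-time setup, inside an open database:
CREATE SQL VIEW v_products AS ;
   SELECT * FROM products ;
   WHERE magazineID = ?gcCurrentMagazine AND stock > 0

* In the form, whenever the context changes:
gcCurrentMagazine = "MAG01"
USE v_products
* ...and later, to refresh with the current parameter value:
=REQUERY("v_products")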
 
Almost every one of those questions deserves a separate discussion. At least the general consensus is: do not use SET FILTER (especially if you are using a grid). Instead you may use SET KEY or an SQL query. SET KEY is the fastest way for a grid, where it is applicable.
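For example (the tag and variable names are assumptions):

Code:
SELECT products
SET ORDER TO TAG magazineID   && an index tag on MagazineID
SET KEY TO gcCurrentMagazine  && the grid now shows only the matching rows
GO TOP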

Do not create too many indexes. They are like drugs: used properly, they do you good; overdose, and you are poisoned. Do not blindly trust Rushmore; test your own queries. Sometimes, despite Rushmore reporting full optimization, you may have no optimization at all.
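In VFP you can watch the actual optimization level with SYS(3054), for example:

Code:
=SYS(3054, 11)   && report Rushmore optimization for filters and joins
SELECT * FROM products WHERE quantity > 0 INTO CURSOR csrTest
=SYS(3054, 0)    && turn the reporting off again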

And if you are starting now, start with SQL Server instead (or some other service-based back end). It has a much better engine, it is client/server... and you may not need to think about many of the details that you otherwise would.

Cetin Basoz
MS Foxpro MVP, MCP
 
Thank you Cetin,

There are indeed numerous facets to the problem... :)
Unfortunately, I'm not starting with a clean sheet: I already have this app, it is sometimes slow, and I need to patch something. I don't have a solution to replace SET FILTER at this time. I keep searching...

Thank you again
Daniel
 
If you have to live with SET FILTER, then I at least hope you are using VFP9 (in which SET FILTER performs much better than before). Create indexes that match your filters as closely as possible (your search expressions and index expressions must match), i.e.:

Code:
index on upper(LastName-(' '+FirstName)) tag fullName

If you have such an index and you want to filter for those whose last name is Doe, then (with SET EXACT OFF, the default):

Code:
set filter to upper(LastName-(' '+FirstName)) = 'DOE '

or starts with 'DOE':

Code:
set filter to upper(LastName-(' '+FirstName)) = 'DOE'

Tip: never, ever use TRIM() in your index expressions. It is useless, and in some SQL queries TRIM() is solely responsible for the slowness.
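For instance (a sketch; "people" is a hypothetical table): a WHERE clause that wraps the field in TRIM() cannot be matched to a plain index on that field, while the padded comparison can, since index keys are fixed-length anyway.

Code:
* Not optimizable by an index on UPPER(LastName):
* SELECT * FROM people WHERE TRIM(UPPER(LastName)) = 'DOE'
* Optimizable, if INDEX ON UPPER(LastName) TAG lastname exists:
SELECT * FROM people WHERE UPPER(LastName) = 'DOE' INTO CURSOR csrDoes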

Cetin Basoz
MS Foxpro MVP, MCP
 
Thank you again Cetin,

In short, here is the situation:
I have the products table; in brief: ProductID, MagazineID, AllowUse, Stock, <some other info>.
Depending on the operation and access rights, I usually set the following filters. For a supplier invoice:
Set Filter To (MagazineID = <current magazine>) .And. AllowUse
and for a customer invoice:
Set Filter To (MagazineID = <current magazine>) .And. AllowUse .And. (Stock > 0)
I have an index on MagazineID, and the stock can vary depending on the operations other users do (in/out).
That's why I set a filter when I enter the lines of the invoice, so I can choose the products more easily. My idea was that it is more efficient to set a filter once than each time I want to add a line. One drawback I don't like: the table is always open, and if an operator stops editing lines and takes a (long) break, the tables stay open.

My initial idea was to build an index like this:

Code:
public gcCurrentMagazine
Index On ProductID For ( MagazineID = gcCurrentMagazine ) Tag SupplierProducts
Index On ProductID For ( MagazineID = gcCurrentMagazine ) .And. ( Stock > 0 ) Tag CustomerProducts


but I am not sure it is a good idea :), or is it?

That's how I am working now.

Thank you again,
Daniel
 
Thank you danfreeman,

You are right; in the end it all comes down to proper database design and normalisation. Filters, indexes and so on follow from that.
But it's too late to change now :((

Daniel
 