Just to give you an idea of how bad a LOCATE for a record is when there is no index whatsoever to optimize the search:
In a table of, say, 100 MB, with the searched record being anywhere in this DBF, the DBF needs to be read from the start up to the record matching the search. On average that means reading 50 MB just to locate one record of perhaps 1 KB in size. That is a waste of time, and though you would expect a twice-as-fast HDD to do this twice as fast, once VFP has read the 100 MB of the DBF into its memory the limiting factor is no longer the HDD read speed.
But even within cached data, reading half the DBF size on average to locate one record is a waste of time.
With an index you have a binary tree of nodes, like doing a number guessing algorithm. This typically needs log2(reccount) reads to find a record. That is, in a DBF with 2^20 records (about a million records) it needs at most 20 reads within the CDX (on average rather 19) to find the right record number, instead of an average of 500,000 reads within the DBF (or the cached DBF in memory).
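Just to put numbers on that, here is a rough back-of-the-envelope sketch in VFP itself (the figures are only the example values from above):

lnRecords = 2^20                      && about a million records
? CEILING(LOG(lnRecords) / LOG(2))    && roughly 20 node reads within the CDX
? lnRecords / 2                       && roughly 524,288 record reads on average for an unoptimized LOCATE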
See? That's a ratio of 20:500,000, and the CDX reads generally even read fewer bytes each than a whole record is.
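In code terms the difference is just this (a minimal sketch, assuming a table customers with a character field cust_id and no index on it yet; the names and the search value are only for illustration):

USE customers                    && no index on cust_id yet
LOCATE FOR cust_id = "C12345"    && sequential scan through the DBF, no index to use
? FOUND(), RECNO()

INDEX ON cust_id TAG cust_id     && build the CDX tag once; it also becomes the controlling order
SEEK "C12345"                    && lookup within the CDX tree instead of scanning the DBF
? FOUND(), RECNO()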
It compares to number guessing. Guessing a number from 1-1000 by guessing 1, 2, 3, 4, 5, ... you end up with 500 guesses on average. If on the other hand you start with 500 and halve the range of remaining numbers with each guess, you need a maximum of only 10 guesses (2^10 = 1024).
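As a little sketch of that guessing game in VFP (lnSecret is just a made-up number to find):

lnSecret = 387        && made-up number to find
lnLow = 1
lnHigh = 1000
lnGuesses = 0
DO WHILE lnLow <= lnHigh
   lnGuess = INT((lnLow + lnHigh) / 2)   && always guess the middle of the remaining range
   lnGuesses = lnGuesses + 1
   IF lnGuess = lnSecret
      EXIT
   ENDIF
   IF lnGuess < lnSecret
      lnLow = lnGuess + 1
   ELSE
      lnHigh = lnGuess - 1
   ENDIF
ENDDO
? lnGuesses           && never more than 10 for a range of 1000 (2^10 = 1024)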
Bye, Olaf.