1:1 relations are special. Most of the time they are needed because the total number of fields exceeds 255. The split into two vertical halves can also be done because of the 2GB limit, but that's a bad choice, as a further split once you reach the combined 4GB limit would again take much effort to change all the code. You're better off splitting tables horizontally if the field count limit is not the problem.
The RELATION binds the two halves in a way that you actually can use REPLACE, as the record pointers are bound and move along together. In such a situation you may even base the relation on the record number, as a 1:1 relation means exactly the same number of records most of the time, unless it's really a 1:0-1 relation and the right-hand half consists of all optional fields.
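For illustration, a record-number-based relation could be set up roughly like this (a minimal sketch; it assumes table2 is the child half and is opened without a controlling index order, which a RECNO() relation requires):
Code:
USE table2 IN 0                       && child half, opened with no ORDER set
USE table1 IN 0                       && parent half
SELECT table1
SET RELATION TO RECNO() INTO table2   && table2's record pointer now follows table1's record number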
So what is your situation with the two tables? Do they share a common ID, each in field 1, or not? From the DO WHILE loop example you give, I can tell your code already works with a 1:1 relation, as you only do one SKIP and still address both table1 and table2. So table2 is moved along automatically by the relation.
Let's address your performance problem: locating rows can make use of an index (even if the relation forces table2 to be ordered by a certain index), so you're better off with loops that locate a start point and then SCAN REST WHILE, or you simply begin and continue with SCAN FOR.
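As a sketch of that first pattern, assuming table1 has an index tag lfname on lower(fname) (the tag name is an assumption). One caveat: this is only safe as long as the field the order is set on is not the one being replaced, so the sketch only stamps recupdate:
Code:
SELECT table1
SET ORDER TO lfname                     && assumed index tag on lower(fname)
SEEK "john"                             && jump straight to the first match
SCAN REST WHILE lower(fname) = "john"   && walk only the contiguous block of matches
   replace table2.recupdate with dtos(date())
ENDSCAN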
Your sample loop...
Code:
do while !eof()
   if lower(table1.fname)="john"
      replace table1.fname with "david"
      replace table2.recupdate with dtos(date())
   endif
   skip
enddo
...can be done better
Code:
SCAN FOR table1.fname = "john"
   replace table1.fname with "david", table2.recupdate with dtos(date())
ENDSCAN
...and if there is no name collision (e.g. no field you address exists in both table1 and table2)
Code:
SCAN FOR fname = "john"
   replace fname with "david", recupdate with dtos(date()) [b]IN table1[/b]
ENDSCAN
Notice I added an IN clause here, stating IN table1, even though recupdate is a field of table2. The relation starts in table1, and the table name you specify in the IN clause has to be the table the relation starts in.
The final step then really is just one REPLACE ALL
Code:
Replace [b]ALL[/b] fname with "david", recupdate with dtos(date()) [b]FOR[/b] fname="john" [b]IN table1[/b]
You don't profit from the fact that a single loop lets you test several different conditions and make different replacements for the different cases. UPDATE-SQL also doesn't let you do different things depending on which condition is met; you simply do two updates in that more complicated case. And it pays off with REPLACEs too, as you never loop over all rows if you index your data so Rushmore optimization can kick in.
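For example, if you had a second change to make under a different condition, it would simply become two separately optimizable statements (the second condition and value are made up for illustration):
Code:
replace ALL fname with "david", recupdate with dtos(date()) FOR fname = "john" IN table1
replace ALL fname with "mary", recupdate with dtos(date()) FOR fname = "jane" IN table1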
So it seems you're under the wrong impression that the repeated iteration will kill the process and mean much more time spent. No, it's the other way around: even the single full iteration is your killer. You don't make use of indexes this way, as the conditions are only checked for the current record by your IF statements. If you do a SCAN FOR loop, that FOR clause, just like LOCATE, can make use of an index to optimize the condition and ideally visits the relevant records directly, skipping all rows that don't fulfill it. That alone pays off so much that it's worth SCANning multiple times to work on other conditions and make other changes, just as you would also be forced to do with UPDATE-SQL.
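If you don't have a matching index yet, it can be created once, roughly like this (tag names are just suggestions):
Code:
USE table1 EXCLUSIVE              && adding tags to the structural CDX needs exclusive access
INDEX ON fname TAG fname          && optimizes conditions like FOR fname = "john"
INDEX ON lower(fname) TAG lfname  && optimizes case-insensitive conditions like lower(fname) = "john"
USE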
I'm not saying you have to change your main loops all into single REPLACEs; you can work with SCAN FOR, LOCATE + SCAN REST and other variants and test what is best. Just get the idea out of your head that you're better off visiting all rows in a single loop and doing all the work while you're there. And once the idea is out of your head, crush it, kill it, so it can never get back to you.
Imagine you really traveled the USA and visited every home, asked whether a John lives there, and if so told them to now listen to the name David. You'd rather look at your index of first names and only visit the addresses where you know a John lives, to tell them the news about their renaming.
Bye, Olaf.