
Unusual order behavior

Status
Not open for further replies.

Scott24x7
Programmer · Joined Jul 12, 2001 · JP
Hi All,
I have a combobox control on a form that I use to control filter conditions by listing the contents of a code table. Before I add those entries to the combo's source array, I insert a few other options at the start (like "Show All", plus two other options that depend on the setting of a toggle).

The problem is that the code table is part of the data environment, with an order set on it (so the entries appear in alphabetical order). But the Requery() of the control seems to take place before that order is established.
So I tried to force the order with the following code:

Code:
DIMENSION This.SourceArray(RECCOUNT('CTSTATUS'),1)
*
This.SourceArray (1,1) = "Show All"
This.SourceArray (2,1) = "DC Related"
This.SourceArray (3,1) = "Non-DC Related"
*
SELECT CTSTATUS
SET ORDER TO COMPSTATUS   && UPPER(COMPANYSTATUS)
GO TOP
DO WHILE NOT EOF()
	This.SourceArray(RECNO(),1) = CTSTATUS.COMPANYSTATUS
	SELECT CTSTATUS
	SKIP
ENDDO

Yet much to my surprise, it still doesn't respect the order of the CTSTATUS table. The entries in the array appear in the native (physical) order of the table, as if no index were set.
I'm baffled... any idea how I can get this into the order I'm expecting (based on the index)?

Very oddly, another control on the same form, which supplies the listbox options for setting the value at the client level, works fine:

Code:
SELECT DISTINCT COMPANYACCOUNTMANAGER FROM COMPANY INTO ARRAY This.SourceArray

That control shows the entries in the order of the index.



Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Good morning Scott,

I think the problem is in this line:

Code:
This.SourceArray(RECNO(),1) = CTSTATUS.COMPANYSTATUS

This line will always add the array entries in the physical table order, regardless of which index is set. You need something like this:

Code:
SELECT CTSTATUS
SET ORDER TO COMPSTATUS   && UPPER(COMPANYSTATUS)
lnItem = 1
SCAN
  This.SourceArray(lnItem,1) = CTSTATUS.COMPANYSTATUS
  lnItem = lnItem + 1
ENDSCAN

(I've changed the DO WHILE loop to a SCAN loop, but that is incidental, and does not affect the solution.)
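For completeness, here is a sketch of how Mike's loop and the array dimensioning fit together once the three fixed options are taken into account (as Scott describes doing later in the thread); the +3 sizing and the starting counter value of 4 are assumptions based on his three hard-coded entries:

```foxpro
* Sketch: size the array for the three fixed options plus the table rows,
* then fill from row 4 onwards using a counter instead of RECNO().
DIMENSION This.SourceArray(RECCOUNT('CTSTATUS') + 3, 1)
This.SourceArray(1,1) = "Show All"
This.SourceArray(2,1) = "DC Related"
This.SourceArray(3,1) = "Non-DC Related"

SELECT CTSTATUS
SET ORDER TO COMPSTATUS      && UPPER(COMPANYSTATUS)
lnItem = 4                   && first free row after the fixed options
SCAN
    This.SourceArray(lnItem,1) = CTSTATUS.COMPANYSTATUS
    lnItem = lnItem + 1
ENDSCAN
```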

Mike





__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
RECNO() always returns the physical record number, no matter what index order is applied.

You simply want to SCAN and then add to the array row by row, either DIMENSIONing it one element at a time or all at once, using a counter that starts at the first row after your default items.

RECNO() does NOT work like T-SQL's ROW_NUMBER(), which renumbers the records of a result set as it is queried or sorted by an index.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Mike and Olaf,
Thanks, that is exactly what the issue was. I suppose the "clues" were there, but I was blinded by the SET ORDER clause. I made the quick change, redimensioned the array at the start to include my 3 "forced" options, and got exactly what I was looking for.

So lesson learned: I didn't realize that RECNO() on an ordered table would end up ignoring the order, but I do get why. That's a breakthrough. I wonder how many times this has caused me an issue over the years without my realizing that was the problem.

Also, was SCAN introduced in VFP, or did it exist in 2.x? I've never come across it before.

Stars to you both.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Glad to have been able to help, Scott.

RECNO() can be a bit confusing. For example, GO <record number> will always go to the record number in physical order, but GO TOP will go to the first record in index order.
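A minimal illustration of that difference, using the CTSTATUS table and tag from earlier in the thread:

```foxpro
* With an index order in effect, GO 1 and GO TOP can land on
* different records.
USE CTSTATUS ORDER COMPSTATUS
GO 1        && always physical record 1, ignoring the index
GO TOP      && first record in COMPSTATUS (index) order
```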

Regarding the SCAN / ENDSCAN loop, that's always been in VFP, and I've a feeling it was also in FoxPro 2.x (but I'm not completely sure about that; it was definitely in dBASE IV, but not in FoxBASE+).

SCAN / ENDSCAN is not only very much faster than DO WHILE NOT EOF() / SKIP / ENDDO, it has other advantages as well. You don't need to explicitly GO TOP before you start the loop; you don't need to SKIP inside the loop; and if you change the work area inside the loop, you don't need to change it back again. It's definitely worth getting to know.
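Those conveniences can be sketched like this (the ORDERS and CUSTOMERS aliases are hypothetical):

```foxpro
SELECT orders
SCAN                    && no GO TOP needed: SCAN starts at the top
    SELECT customers    && switch work areas inside the loop
    * ...look something up in CUSTOMERS...
ENDSCAN                 && no SKIP needed, and ENDSCAN restores the
                        && ORDERS work area before advancing
```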

Mike


__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Mike Lewis said:
Regarding the SCAN / ENDSCAN loop, that's always been in VFP, and I've a feeling it was also in Foxpro 2.x (but I'm not completely sure about that; it was definitely in dBASE IV, but not FoxBase Plus).

If it was in dBASE IV, it was in FoxPro 1.x at SOME point. They didn't reach "full" (it was never full) feature parity until 2.6.

FoxPro 1.x was actually delayed quite a while waiting for dBASE IV's release. Compatibility wasn't quite complete, and they added a few missing features with each subsequent release, but the major pieces were there.
 
It's also worth noting that SKIP +1/-1 with an index order set navigates records in that order, not in RECNO() order, so the legacy DO WHILE loop also works. But Mike has already covered the SCAN..ENDSCAN advantages. It comes up regularly every now and then; I wonder why it's new to you.

You could also use RECNO() again if you SELECT ... ORDER BY ... INTO CURSOR (NOFILTER/READWRITE), because then the cursor records are added in the given SQL order, and the index is used to optimize creating that result. That takes the extra time of reading the data and writing it into a new DBF (cursor), though roughly the same number of bytes has to be read anyway if you just SCAN the DBF after SET ORDER.

All that changes with SET ORDER is that "top" is no longer record 1 but the first index node in that CDX tag, and navigation to the next/previous record is done by visiting tree nodes/leaf nodes in a certain way to get the data in the order that index represents. (I always forget the exact traversal; I think you visit siblings first, then go down a depth level to reach the leaf nodes of the index tree in tag order.)

Unlike MSSQL's clustered index, VFP has no index that stores the data itself at its leaf nodes; the leaves always store record numbers, which point to a physical file offset of HEADER() + (RECNO()-1) * RECSIZE(), and the file size is HEADER() + RECCOUNT() * RECSIZE() + 1, with the +1 for the final EOF byte after the last record. That's also a reason RECCOUNT() includes deleted rows.

With this offset formula, and because DBF records all have a fixed size, knowing the RECNO means knowing the offset in the DBF file; you don't need to traverse a linked list of nodes or anything like that.
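The offset arithmetic can be sketched directly with VFP's own functions (MYTABLE is a hypothetical alias; HEADER(), RECSIZE() and RECCOUNT() are the real functions):

```foxpro
USE mytable
lnRec    = RECNO()
* Start offset of the current record inside the DBF file:
lnOffset = HEADER() + (lnRec - 1) * RECSIZE()
* Total DBF file size (the +1 is the trailing EOF byte):
lnSize   = HEADER() + RECCOUNT() * RECSIZE() + 1
```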

In MSSQL you have a hierarchy of pages, and within pages I think something like a linked list manages the varying-size records. VFP stores a varying-size memo as 4 bytes in the DBF, a pointer into the FPT file, where the varying-size data lives. VFP avoids the varying-size problem with this simple offset logic, but pays for it with a double read for Memo, Blob, General and similar field types.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Mike, Dan, Olaf,
Thanks for the further depth on this. I think the reason I missed out on this was I got onto Fox right at 2.0, and then 2.6 carried us a very long way. When I made the transition to VFP, it was a long slow one, and it took me a while to grasp the new OOP methods at that time. So some habits just hung around from the systems I had learned on, and the methods I'd used. Then I had a 15 year break starting around VFP 7. When I got back in VFP 9 was the last version, and I think it was already "dead" for a few years. But... it's so powerful and my skill set was pretty atrophied, I knew I'd need 2+ years to get back to a reasonable developers level, and I had this mound of code base from VFP when I was learning it (I mostly made the leap using a set of training CDs that were based on VFP 6, around 2002.) So I missed a lot,


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
SCAN/ENDSCAN has another advantage Mike didn't mention. It's subtle: the command is scoped. That is, it supports FOR, WHILE, REST and NEXT <n> as part of the command. That makes it "set-oriented", and it also makes it Rushmore-optimizable.

It's so doggone useful I can't remember the last time I wrote a DO WHILE loop. [smile]
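A sketch of a scoped SCAN (the table, field and tag names are hypothetical); Rushmore can optimize the FOR clause when a matching index tag exists:

```foxpro
USE customers
* Assumes an index tag on UPPER(country) exists for Rushmore to use:
SCAN FOR UPPER(country) = "JP"
    ? company
ENDSCAN
```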
 
Good point, Dan. One small point to keep in mind: if you use a scope clause, it will override the SCAN's usual behaviour of always starting at the top of the table. Come to think of it, that's fairly obvious - and probably what you would expect.

Perhaps the biggest advantage of SCAN / ENDSCAN over DO WHILE NOT EOF() / ENDDO is performance. Tamar has done a lot of timing tests on this sort of thing. She says that, looping through an unordered table and doing nothing else, SCAN / ENDSCAN takes about 70% of the time of DO WHILE. (That's not always the case when an index order is in force, and occasionally DO WHILE can be faster - but not usually.)

Source: "We Used to Do it That Way", Southwest Fox 2008.
Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Wow, that is cool. I think you were trying REALLY hard to get a star on this one, Dan, and you just got it. :)

I will go looking for the DO WHILEs in my current project and endeavour to change them to SCAN/ENDSCAN instead. (That may take a week or two, but the Rushmore point is well worth it.)

In fairness, to the other point Mike/Tamar make: I always make a point of using an index and finding a record set, then DO WHILE <condition> to match one criterion, such as when dealing with a detail table for a set of records. The only time I use DO WHILE NOT EOF() is on tiny tables of fewer than 100 records (usually no more than 10 or 15), where it addresses the whole table; in those cases, while Rushmore might be faster, it's on tasks that don't get heavy repetition (a Requery of a source array doesn't seem to be a big issue). So the timing is one thing, but it's not always crucial; the next arguments are maintainability and consistency. So if I'm going to make a change, unless there's some heavy processing reason otherwise, I'll stick with SCAN/ENDSCAN. In my current application there really isn't any hard-core processing; it's mostly a data management system focused on client, project and knowledge management, and none of that tends to involve much "crunching" (unlike my old call-rating system, which had millions of records and clients - there we REALLY cared about crunching).

All great, this has been an interesting eye opener.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Scott, Rushmore will only optimize the scope clauses you use with SCAN..ENDSCAN - that is, when you do a SCAN FOR condition and that condition is Rushmore-optimizable. It will also only use one index tag, and you're already using the index order for sorting...

When you SCAN all records, the ~30% faster looping Tamar measured likely comes from the automatisms already mentioned - the automatic SKIP 1 instead of your doing it explicitly.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Olaf, it's true that only the scope will be optimized.

But remember that every scoped command has a default scope. The default scope for REPLACE is NEXT 1. The default scope for REPORT FORM is ALL. The default scope for SCAN is ALL. So, really, the command is always scoped.
 
True, but there is no Rushmore optimization for ALL, and I don't see one for NEXT 1 or REST either. All that's ever mentioned for Rushmore is FOR condition, and maybe WHILE condition (which could also be called REST FOR condition). So once you SCAN ALL of perhaps 5-10 records, there's nothing to really gain.

Scott, unless you have cases where you could make use of a scope and iterate over fewer than ALL records, refactoring existing code isn't worth it; I'd only keep this in mind for current/future code. I doubt you'll see the ~30% gain in practice, not least because such iterations won't make up 100% of code execution anyway. Concentrate on bottlenecks.

Bye, Olaf.

Olaf Doschke Software Engineering
 
Code:
SEEK lcKey                  && lcKey / KeyField are placeholders
DO WHILE NOT EOF() AND KeyField = lcKey
   * ...
   SKIP
ENDDO

can be faster than SCAN FOR, though probably not than

Code:
SEEK lcKey
SCAN WHILE KeyField = lcKey
   * ...
ENDSCAN

For big time savings in your code, look for counted DO WHILE loops and replace those with FOR. That's orders of magnitude faster.

Tamar
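Tamar's last point can be sketched like this; the loop body here is just a placeholder sum:

```foxpro
* A counted DO WHILE loop...
lnTotal = 0
lnI = 1
DO WHILE lnI <= 1000
    lnTotal = lnTotal + lnI
    lnI = lnI + 1
ENDDO

* ...and the equivalent FOR loop, which is far faster because the
* loop control is handled internally instead of per-pass in code:
lnTotal = 0
FOR lnI = 1 TO 1000
    lnTotal = lnTotal + lnI
ENDFOR
```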
 
Wow, I had no idea this was going to open such a big can of worms.
For populating options in my dropdown it's all academic, as the speed on 10-20 options is counted in milliseconds, but where there's serious processing on large amounts of data, this is really useful.
For the moment I don't have that kind of processing on any of the data, because the data in this case tends to be rather static: bring in large amounts of data, but then just view it, with no real processing on the majority of it. That's unlikely to change for this application, but you never know what I might dream up for it later. I miss the days of having big crunching to do; this would have been GREAT for that.
Thanks to all for the information and the very interesting discussion.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Mike, you seem to always want to interpret things the wrong way.

When I said
myself said:
Scott, Rushmore will only optimize scope clauses you do with SCAN..ENDSCAN
I didn't exclude SQL from Rushmore optimization. I said that in the context of SCAN loops, and that is clear.
Dan then mentioned that all SCANs are scoped.

Can we get back to normal here, please? I could point out other posts of mine about Rushmore explaining how SQL's Rushmore optimization is superior to a FOR-condition optimization. I'm too lazy to look for them, and I don't need to prove myself here. But I doubt it's always better to use SQL, and Tamar's findings about SCAN vs DO WHILE still hold true.

Efforts to change working code to best-known practices mainly raise the risk of introducing an error into something that works sufficiently well. And indeed, if you do go through all your DO WHILE loops, the best way to optimize one might not be to replace it with SCAN but with an SQL query - SELECT-SQL or UPDATE-SQL, depending on what the body of the loop does; a SCAN is just a straightforward, purely "mechanical" equivalent construct. If you look even more broadly you might find entirely different solutions, so detailed knowledge of all these optimizations can become very over the top. It's merely the simpler usage of more modern concepts that makes them more valuable. Judging anything new only against what you already know and find sufficient turns younger early adopters into elderly laggards - but you don't tear down houses built with less efficient thermal insulation, or without any other newer concept, either. Not even when it would only take a few minutes, and especially not when you know your software's users won't get that time back over the software's whole lifetime.

Bye, Olaf.




Olaf Doschke Software Engineering
 
I hope that I have the wrong impression of the atmosphere in this forum, I really do! No matter how trivial a question is, it has a tendency to turn into a series of messages in which absolutely every aspect of barely related details is discussed. And everyone wants to have the last word....

All I can say is that it's the main reason I have decided to refrain from answering on this forum. That, and the censorship...
 
Hi,
I was wrong: a remark of Olaf's to "Mike" was not aimed at Mike Lewis. Since I was wrong, I have meanwhile removed my remark.
Koen
 
Thanks for that, Koen. I was aware that Olaf was addressing Mike Yearwood, not myself, but others might not have been.

"Mike" seems to be a common name in the programming world. I used to wish that I had a more unusual name, but I've got used to it now.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 