
While I Was Away...


Scott24x7

Programmer
Jul 12, 2001
Since my return from developer retirement, I've found some interesting things over the past few months about the inner workings of VFP/FP. So I thought I would ask about something we used to code for in FP2.x, and wonder whether it has been sorted out in the VFP9 era.

In the "old days" if we had deleted records in a table, and we wanted to get rid of them, and that table had any index(s) with it, we would first:

SELECT TABLE && select the work area of the table (placeholder name)
DELETE TAG ALL && drop all tags from the structural CDX
PACK && remove the deleted records (requires exclusive use)

<Run some routine to rebuild our indexes>

This was because index bloat, and sometimes odd behavior, would occur if you just issued a PACK with the old index files in place.

Do I still need to do this in the VFP era, or has that overall issue been fixed (i.e. by perhaps holding the index definitions, killing the index, and rebuilding it fresh without all that mess)?

I haven't seen this discussed.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Hi Scott,

No, you don't need to explicitly rebuild the indexes - at least, not usually. Provided the index files are open when you issue the PACK, VFP will rebuild the indexes as well. That would normally be the case if you have a single CDX file (a so-called production structural index). If you have a second CDX, or any IDX files, you will need to explicitly open them first (with SET INDEX) before you do the pack. But the chances are you don't have those files.
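
For example, a minimal sketch (the table and index file names here are just made up for illustration):

USE customers EXCLUSIVE && opens the structural customers.cdx automatically
SET INDEX TO extra.cdx ADDITIVE && only needed if you really have a second CDX or any IDX files
PACK && removes the deleted records and rebuilds all open indexes
USE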

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
What is a second CDX?

A given DBF can have as many CDXs as you like. The structural CDX is the one that is opened by default. It has the same name as the table. But you can create further CDXs for the same table. Each will have its own collection of tags.

You create a CDX by using the OF clause in the INDEX command. You open it by using the INDEX clause of the USE command or the OF clause of the SET ORDER command.
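
For example (the names are invented, just to show the syntax):

USE customers EXCLUSIVE
INDEX ON UPPER(city) TAG city OF extra.cdx && the tag goes into a second, non-structural CDX
USE customers INDEX extra.cdx && later: open the table with the extra CDX
SET ORDER TO TAG city OF extra.cdx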

And when do you use that guy?

Probably never.

Mike



__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Koen Piller said:
And when do you use that guy?

When you want a permanent collection of index tags that may or may not be up to date. [shadeshappy]

The whole mechanism came about when some people complained about being FORCED to use the structural CDX during early alpha/beta cycles when the feature was new. I've never used a second CDX. If I'm going to create a single-use index I'll use a standalone IDX file instead, and I can't remember the last time I did even that.
 
Scott, since you are in the process of reviewing and refactoring your code, you might find the following helpful. This is the outline of a packing routine that I put in some of my applications.

1. Start by calculating the approximate amount of space that will be saved by packing. You can do this by looping through all the tables in the database, determining the percentage of deleted records in each case. (The calculation will be approximate because you don't take account of the variable lengths of the memo fields.)

2. Notify the user of the approximate saving, and, based on this figure, advise them on whether or not to go ahead with the pack. (The user is free to ignore this advice, of course.)

3. Close all the open forms in the application (after warning the user that you are about to do so). Then loop through all the open tables in all data sessions, and close those as well. (Closing the forms doesn't guarantee that all tables will be closed, as some might be open behind the scenes, for example by a SELECT command.)

4. Ask the user to ensure that all other users are logged out, as far as is practical.

5. Loop through all the tables in the database. Issue a PACK for each of them. Error-trap the PACK command, and maintain a count of those that fail. (You don't need to do PACK MEMO; memo files will be packed automatically.)

6. On completion, advise the user if the number of tables that you packed was less than the number of tables in the database (that is, if your error-trap detected that any of the packs failed). This will happen if they failed to get exclusive use because a user was not logged out. Suggest they try again another time. (For good measure, you can give them an indication of how much more space would have been saved if the packs had all succeeded; this will help them decide if it is worth trying again.)

7. Finally - very important - close the application at this point. You have closed all the tables, including any that were open behind the scenes, and carrying on in that state might cause the application to fail. If you can also programmatically restart the application at this point, so much the better.

It looks like a lot of work, but it is not all that much code.
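
To give a flavour of steps 1 and 5, here is a rough sketch (the alias name is invented, the user dialogue and progress reporting are left out, and it assumes the database is already open):

LOCAL laTables[1], lnTables, lnI, lnDeleted, lnFailed
lnTables = ADBOBJECTS(laTables, "TABLE") && all tables in the current database
lnFailed = 0
SET DELETED OFF && so COUNT FOR DELETED() sees the deleted records
FOR lnI = 1 TO lnTables
   TRY
      USE (laTables[lnI]) IN 0 ALIAS packwork EXCLUSIVE
      SELECT packwork
      COUNT FOR DELETED() TO lnDeleted
      * ... step 1: report lnDeleted and RECCOUNT() to the user here ...
      PACK && step 5: removes the deleted records and rebuilds the open indexes
   CATCH
      lnFailed = lnFailed + 1 && exclusive use not available, or the PACK failed
   FINALLY
      IF USED("packwork")
         USE IN packwork
      ENDIF
   ENDTRY
ENDFOR
SET DELETED ON
* ... step 6: tell the user how many tables could not be packed (lnFailed) ...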

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 
Just by the way, Koen, if you program for a set of VFP applications built with different versions, you can have a separate CDX, for example for a binary index. That was one case where I used this: to let the main CDX stay compatible with VFP6/7 clients and still be able to profit from binary indexes in a VFP9 application. With many indexes you could also split them across CDX files to overcome the 2GB limitation, but if you have that problem you'll likely already have very unfortunate insert timing, with many indexes updated/extended per new record.
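
For example, something along these lines (the names are made up, and the BINARY keyword needs VFP9):

USE orders EXCLUSIVE
INDEX ON DELETED() TAG deleted BINARY OF orders_vfp9.cdx && VFP9-only tag kept out of the main CDX
USE

The VFP6/7 clients simply never open orders_vfp9.cdx, while the structural orders.cdx stays unchanged.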

Scott, you could still use that legacy approach, but not long ago, when a question about handling corruption of a DBC came up, I showed in detail how REINDEX from an empty CDX file is also a working option. See thread184-1786434
myself said:
And no, REINDEX is not bad. Yes, if the CDX header is broken, REINDEX won't save it; it's not a metadata repair. But if you back up the CDX files of empty tables, right after their creation, or ANY state of them that's healthy, you can copy that back, REINDEX, and your indexes are complete and up to date again.

And the results are the same as with REINDEX; binary-equal files result from that approach. Which shows me that a ZAP-like behavior is built into REINDEX: the CDX is reduced to its header section only, which contains the tag names and expressions. That's all you need to rebuild the indexes in one simple go, instead of maintaining all the individual INDEX ON commands and even ALTER TABLE statements (as primary tags are a type you can't specify with INDEX ON).

I know VFP7 had several bloat problems, but more so with memo files than with CDX. IDX files are larger, as they are not "compressed" (but don't think CDX tags are ZIP or LZ compressed, either). With VFP8 and 9 that was fixed, likely in the way I observed: REINDEX first truncates a CDX file to its header only anyway, and then recreates all index tags.

Backing up your data is something you should do anyway, so even if CDX files are corrupt in their header and you can't REINDEX them, you can restore a CDX and REINDEX that, even (as quoted above) from any state of a CDX file, not just the empty state. And there should be a backup that's not too old, still intact, and has all the current tags.

For me, keeping CDX files of the empty DBF states is one of the things to back up for index restoring. Of course, once you add a tag, you need a new backup, but by then your DBF is no longer empty. Well, you can always create an empty version of any DBF quickly by creating a new DBC and doing COPY TO ... FOR .F. (or GO BOTTOM and COPY TO ... REST); the resulting CDX can then be used for the original DBF. That way you can keep the empty backup up to date, too.
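
In code that is roughly (paths and names are only placeholders):

CREATE DATABASE backups\emptycopies
USE data\mytable
COPY TO backups\mytable DATABASE backups\emptycopies FOR .F. WITH CDX && structure + all tags, zero records
USE
* backups\mytable.cdx now contains all the tags but no index nodes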

Bye, Olaf.

Olaf Doschke Software Engineering
 
Just to add to Mike's useful instructions, think in terms of packing as a once-a-year thing (or once-a-month, if you do lots of deletion). If you think you need it daily, it suggests some redesign is needed because no application should be deleting that many records daily.

Tamar
 
I'm happy to hear this. In my old FP2.x days, there seemed to be a lot of index corruptions, and so we created a "PACK AND REINDEX" routine. The main reason wasn't to get rid of deleted records, but to fix broken indexes. This is the area I'm still somewhat torn on, because without the routine, if a table and/or index gets corrupted, unless you have a "record" of the indexes somewhere else, you run the risk of having a very hard time recreating what they are needed for.

Olaf has mentioned something about an "empty index", but I don't quite grasp that. While I remember .IDX from way back, I have really only ever used a single .CDX in almost every case. (Since moving to VFP, I've not created an .IDX.) I had expected there were probably advancements in this area, so I'm still looking for a "best practice" for index recreation, since a corruption there usually isn't the end of the world, but it is something I want to fix quickly when I need to.

The PACK aspect of it, yeah, even in the 2.x days, was something we would usually only do once a month or so.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
You should be using Stonefield Database Toolkit. It maintains metadata about your structures and indexes and handles "pack & reindex" for you.
 
I still use a routine from ages ago called "STRULIST" that dumps all the table structures and indexes into a .TXT file. You just run it in place in the DBFs directory, and you have it all, so I have that as my "backup". Crude but effective.
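
For anyone who doesn't have such a routine, even something as crude as this produces a similar record (not the actual STRULIST; the table and file names are placeholders):

USE mytable
LIST STRUCTURE TO FILE mytable_stru.txt NOCONSOLE && field names, types and sizes
LIST STATUS TO FILE mytable_tags.txt NOCONSOLE && open index files, tags and key expressions
USE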

Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
What is difficult to grasp about an empty index?
When you have an empty table with 0 records, just the structure, and you index that, all your index file has is the index tag names and expressions: no data, no index nodes. Just like the DBF only has its header, field names and types, and no data.
Both DBF, CDX (or IDX) files then have the minimum size.

Storing that empty CDX file is just like storing your list of INDEX ON commands, etc.; it's simply already the result of them. Your code for recreating indexes reduces to COPY FILE backups\table.cdx TO data\table.cdx and then REINDEX, no matter what indexes you have, as long as the backed-up CDX reflects the current indexing. And I don't see that as a problem, as code doing the indexing can also get outdated, so you always have the task of keeping this information up to date. Besides, I also described how to get an empty CDX even when your DBF and CDX have already grown, or when you add a TAG later: COPY TO makes it easy to copy 0 records and get the empty CDX, too, no matter what tags are in there.
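
In code, the restore is roughly this (paths are placeholders, and REINDEX needs exclusive use):

CLOSE TABLES ALL
COPY FILE backups\table.cdx TO data\table.cdx
USE data\table EXCLUSIVE
REINDEX && rebuilds every tag found in the restored CDX
USE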

Bye, Olaf.

Olaf Doschke Software Engineering
 
Hi Olaf,
It's just a different philosophy than I'm used to. I like the idea, and will look into implementing it.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
In nearly 30 years working with FoxBase, FoxPro and VFP, I think I can count on my hands the number of corrupted indexes I've encountered. Not something I'd spend very much time coding for.

Insist on daily back-ups and a UPS on every computer and you should minimize corruption and never lose more than a day's work.

Tamar
 
Same here. (With dbf corruption as well.)

But I've had clients who can seemingly corrupt them at will. [glasses]
 
Yeah, in fairness, the bad experiences were with systems that I inherited, and it took a long time to get them stable. When apps are stable, the issue is not so common. To Tamar's point, I have found corruption most common when there is some kind of machine failure with a power loss (and UPSs don't remove that issue).
The only other issue has been a hung application on rare occasions, where you have to force the app closed with "End task". But most of the time that isn't an issue.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Most of the index corruptions I encountered were down to SMB oplocks. The oplock problem only caused CDX corruptions, indeed, and so this mechanism was handy to ensure I could rebuild them.
Besides that, we did daily packing and reindexing for that customer, also for performance optimization. It was also a step taken before creating a data warehouse from about 10 databases, which were copied locally for fastest processing and then copied back. When the process failed, no database was harmed, as the original state remained; that process also uncovered many data errors, not only file corruptions.

Bye, Olaf.



Olaf Doschke Software Engineering
 
That's an interesting point, Olaf. I have been using optimistic table buffering by default for my forms. Should I consider something else?

Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Scott, the oplock problem that Olaf mentioned is not the same as optimistic table buffering. So the answer to your question is: No, you don't need to reconsider optimistic buffering.

You might find this article of interest: Trouble-shooting a Visual FoxPro application.

Mike

__________________________________
Mike Lewis (Edinburgh, Scotland)

Visual FoxPro articles, tips and downloads
 