Randomly displaying single or multiple records 2

tsibert (Technical User) · May 1, 2015
I haven't used FoxPro in a while, but recently had the need to dig it out. In the past I have used FoxPro to link multiple tables together based on the tables having fields with the same name, size, and data type. But I'm running into an issue: when the tables are linked and the relationship is set from A into B, I sometimes see all the records from B, and other times I can only see the first matching record, though when I click into B all the records show. I've never had this issue before... any help is greatly appreciated.
 
I think the best thing to do is to have one counter table per table in which you want an autoinc-like integer value, say autoincoftableX.dbf for tableX's autoinc field. It only has to have one field and one record holding the next value or the current max value, your choice.

Then have an insert trigger in a DBC, or make it a hard rule of your programming to go through a routine that will:
1. FLOCK autoincoftableX.dbf,
2. get and increment the value in it, store the incremented value, then use it in tableX,
3. then unlock autoincoftableX.dbf, flush, and close it.
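
A minimal sketch of such a routine; the names GetNextID, counterX, and the field nextid are placeholder assumptions, not anyone's actual code:

Code:
* Get the next unique ID for tableX from a one-field, one-record
* counter table, serialized by an FLOCK on that counter table.
FUNCTION GetNextID
   LOCAL lnNewID
   USE autoincoftableX.dbf IN 0 ALIAS counterX SHARED
   DO WHILE !FLOCK("counterX")
      * keep retrying; see the notes on delays further down the thread
   ENDDO
   REPLACE nextid WITH nextid + 1 IN counterX
   lnNewID = counterX.nextid
   UNLOCK IN counterX
   FLUSH
   USE IN counterX
   RETURN lnNewID
ENDFUNC

The SHARED open plus FLOCK combination means other clients can have the counter table open at the same time, but only one at a time gets to increment it.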
Flushing and closing makes the counter table incompatible with participating in a transaction, so these tables are better kept as free tables. Free tables don't participate in transactions unless you use the MAKETRANSACTABLE() function, and that's not even available in VFP6, so free tables will always update directly. That's important for any concurrent users also creating an autoinc value in the same tableX, even if your user/session buffers its new ID.

And I think you'll point out the concern that autoincoftableX.dbf could go out of sync with the actual tableX.dbf, and that therefore MAX(id) from tableX is the only sure shot. Well, that's only true if you never buffer, and even then only if you FLOCK tableX, which is usually too drastic: even if it's only temporary, for creating the next ID, your tableX will usually already be open shared by some client, making an FLOCK of tableX impossible.

That's why VFP's feature of keeping the autoinc counter in the DBF's header is so nice: it circumvents any buffering and updates the DBF header with the next autoinc value. The only case where that can't be done is a DBF opened exclusively or FLOCKed, but then no other client can append or insert anyway, and the client holding the FLOCK or the EXCLUSIVE use can do the insert and update the header counter itself. It's the only way to guarantee the DBF stays in sync with itself.
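
For reference, that native header-based autoinc counter (VFP8 and later) is declared like this; table and field names are placeholders:

Code:
* Integer autoinc column whose counter is stored in the DBF header
CREATE TABLE tableX (id I AUTOINC NEXTVALUE 1 STEP 1, name C(40))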
 
Of course, very valuable advice. I have a program that 100-200 users work on at the same time. However, I created the DBF tables separately for individual departments (logistics units) in separate folders (30 folders). Each table with main data contains 10-20 thousand records. Each table has its own ID incrementation, implemented by the SQL "Select Max(ID_File) From" and locking the table header. There is no problem with manual data entry: only 3-4 users enter data into one table, and each user needs about 2-3 minutes to enter the data.

The problem occurs during mass data import from XML, when about 50-100 records are added in a short time. Every 5 records I make a 500 millisecond break and FLUSH so that VFP can "rest" for a moment; during this time I unlock the header. I display a progress bar so the user does not get bored. The program has been used for 10 years and an ID has never repeated within one table. Sometimes the index breaks (2-3 times a year), and I've had a corrupted table a few times in the last 10 years; I once used a DBF recovery program to repair it. Once an IT person or user accidentally deleted the tables.
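
Roughly sketched, that MAX()-plus-header-lock approach looks like the following. This is a reconstruction, not the actual program code: mainTable is a made-up name (ID_File is from the post), and it relies on VFP's documented behavior that RLOCK() on record "0" locks the table header:

Code:
* Next ID via MAX(), serialized by locking the table header
* (record "0" in RLOCK() addresses the header).
* Assumes mainTable is already open shared.
LOCAL lnNewID, laNext[1]
DO WHILE !RLOCK("0", "mainTable")
   * wait until the header lock is granted
ENDDO
SELECT MAX(ID_File) + 1 FROM mainTable INTO ARRAY laNext
lnNewID = IIF(_TALLY > 0 AND VARTYPE(laNext[1]) = "N", laNext[1], 1)
INSERT INTO mainTable (ID_File) VALUES (lnNewID)
UNLOCK IN mainTable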

I don't use VFP SQL to update data; I use the traditional commands "REPLACE ... WITH" and "APPEND BLANK". I used MS SQL, but it stopped working. Somehow IT still tolerates me with VFP, but I hear voices that the end is coming. Someone will have to write a similar 64-bit system using more modern tools.
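
Spelled out, those abbreviated xBase commands are the equivalent of SQL INSERT/UPDATE (field names invented for illustration):

Code:
* traditional xBase append/update instead of SQL INSERT/UPDATE
SELECT mainTable
APPEND BLANK
REPLACE ID_File WITH lnNewID, LastName WITH lcName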

Thank you, Chris, for your help.
 
You can go with less; I only gave my recommendation for getting stability similar to autoinc, despite the fact that separate DBFs can easily be manipulated. It's not impossible to manipulate the header of a DBF file and set the autoinc counter to a wrong value, too, but usually there is no interest in manipulation: users want working software, too. So what you would need to defend against is, perhaps, sabotage by employees that got fired, etc.

As you lock the header, you already prevent the simplest failure case, two clients concurrently writing the same maxid+1 value as the next ID, because only the one client holding the header lock can do that. If you had a magnitude or two more users, that header lock would already be too blocking for shared usage of the DBFs. The separate DBF, used only for determining an ID, is locked separately and gives you a unique value that can then be put into the main table's DBF without any locking of that table at all, just in shared access, with or without buffering; that's the big advantage. There's still waiting time for batch processing, of course, but you could add a mode that increments by 5, 10, or 100 and uses those 5, 10, or 100 IDs for your batch appending without going through the lock/unlock process for every single ID.

That works because everyone (every piece of code) of course complies with that mechanism, so even before your batch adding is done, another client will get id+101, even if that one insert happens before the batch has even started.
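
A sketch of that block-reservation mode, reusing the hypothetical counter table and names from above (none of them are from the thread):

Code:
* Reserve tnCount consecutive IDs in one lock/unlock cycle and
* return the first; the batch then uses first .. first+tnCount-1.
FUNCTION ReserveIDs(tnCount)
   LOCAL lnFirstID
   USE autoincoftableX.dbf IN 0 ALIAS counterX SHARED
   DO WHILE !FLOCK("counterX")
      * retry, ideally after a short random delay
   ENDDO
   lnFirstID = counterX.nextid + 1
   REPLACE nextid WITH nextid + tnCount IN counterX
   UNLOCK IN counterX
   FLUSH
   USE IN counterX
   RETURN lnFirstID
ENDFUNC

A client that inserts a single record while the batch is still running simply gets the first value after the reserved block.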
 
In one of the programs I used increments with a common table with the structure TableName C(10), Counter N(10). This gave good results. It had as many records as there were tables requiring increments. The procedure opened the table, found the table name, called RLOCK() until successful, increased the counter, and closed the table. I also added a 200 millisecond delay. It did not affect the comfort of work.
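
That variant, sketched with assumed names (counters.dbf, NextCounter) around the structure given above:

Code:
* One shared counter table (TableName C(10), Counter N(10)) with a
* record per table, serialized by a record lock instead of an FLOCK.
FUNCTION NextCounter(tcTable)
   LOCAL lnNewID
   USE counters.dbf IN 0 ALIAS counters SHARED
   SELECT counters
   LOCATE FOR UPPER(ALLTRIM(TableName)) == UPPER(ALLTRIM(tcTable))
   IF !FOUND()
      USE IN counters
      RETURN -1  && no counter record for this table yet
   ENDIF
   DO WHILE !RLOCK("counters")
      * the post used a 200 ms pause between attempts
   ENDDO
   REPLACE Counter WITH Counter + 1
   lnNewID = counters.Counter
   UNLOCK IN counters
   USE IN counters
   RETURN lnNewID
ENDFUNC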

I noticed that VFP works more stably on DBFs if the write and update operations are slow. There is a 50-100 millisecond break between individual commands. Often an "APPEND FROM" of a table with more than 2000-3000 records puts VFP into a state of contemplation. :) Maybe it depends on the server settings; I have no influence on that.

Thanks again, Chris, for your help. It was an honor for me to make a few entries on this elite forum.

Don't be angry, Mike, at my old-man chatter :)
 
A single table for all ID counters is fine; RLOCKing allows parallel usage, too. It's just that RLOCKs, in my experience, can fall victim to the same mechanism that makes opportunistic locks not work as steadily as FLOCKs. And allowing parallel FLOCKs is possible if you use one autoinc table per database table; that's all the reasoning behind that.

I also need a delay mechanism to avoid the deadlock of two clients both wanting to get an FLOCK on the same autoid table; it's not hard to do that.

"There is a 50-100 millisecond break between individual commands."
Not generally speaking. I think you only mean write operations to files, but even those can go fast on the VFP side, as it can delegate what has to be done to the file system. What needs time is ensuring that the next line of code after, say, a REPLACE or APPEND will see the change in the data. And besides using caching for that, I think VFP waits for the actual changes to have happened before continuing more often than not, even though VFP makes use of its own caching on top of any other caching done by the system.

Waiting with a delay before making your client's next try to get a lock prevents a deadlock arising from the two involved clients repeatedly trying the next lock at the same moment. Delaying allows the other client to succeed, and then your client finally also gets its lock. You raise the chances of not deadlocking on successive tries when the delays differ, say a minimum plus a random time.
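
A minimal sketch of such a randomized retry, using the Windows Sleep API for the sub-second pause (the 50 ms base and 100 ms spread are arbitrary example values, and TryFLock is a made-up name):

Code:
* Try to FLOCK an alias with minimum-plus-random delays between
* attempts, so two competing clients fall out of lockstep.
FUNCTION TryFLock(tcAlias, tnMaxTries)
   LOCAL lnTry
   DECLARE Sleep IN kernel32 INTEGER dwMilliseconds
   FOR lnTry = 1 TO tnMaxTries
      IF FLOCK(tcAlias)
         RETURN .T.
      ENDIF
      Sleep(50 + INT(RAND() * 100))  && minimum + random wait
   ENDFOR
   RETURN .F.  && give up; the caller decides what to do then
ENDFUNC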

What adds to the effectiveness of delays, I think, without looking into the details of the involved hardware and OS layers, is event queue processing. I would say a lot of things are handled with events, and letting them happen means allowing the OS to process the general system event queue, which is done by sleeping in one way or another.
 