Tek-Tips is the largest IT community on the Internet today!


I need to update a table from another table each night

Status
Not open for further replies.
Hi everyone, looking for some help.

I have two tables with the same structure. One is updated during the day by the data entry person, and I want to update the second one at night from a batch file, so I can use the night-updated one in another application; I have my own reasons.

1. TENG_XREF.DBF gets updated during working hours
2. TENG_SHTD.DBF needs to get updated from TENG_XREF at night, from a batch file that calls an EXE file

This is what I have in the PRG file that I later built into an EXE. By the way, can anyone point out why the new records in TENG_XREF do not appear in TENG_SHTD, or why that last DBF does not get updated from TENG_XREF? What is wrong in the SELECT SQL?
Code:
USE TENG_SHTD IN 0
SELECT JOB_NO, SHT, LAYOUT, DESCRIP, cust_name ;
   FROM TENG_XREF ;
   WHERE LAYOUT NOT IN (SELECT LAYOUT FROM TENG_SHTD) ;
   INTO CURSOR JUNK ;
   ORDER BY 1,2,3
SELECT TENG_SHTD
APPEND FROM DBF('JUNK')
CLOSE TABLES
CLOSE ALL
QUIT

Thanks in advance
 
Since you APPEND, new data will only appear as new records; no existing records are updated.

If there is no record with a LAYOUT value not already found in TENG_SHTD, the SELECT query result (JUNK) will be empty and you append nothing.

Bye, Olaf.
 
Olaf,
Well, what I meant by "updated" is to transfer the records from Teng_xref that are not in Teng_shtd; maybe this time I explained it better.

Well then, this SQL is wrong in using the LAYOUT field as the one "not in" Teng_shtd.dbf. All I want is to append to Teng_shtd.dbf whatever was added to Teng_xref that, of course, is not in Teng_shtd; that is the reason I used the LAYOUT field.

I don't want to use the old method of SET RELATION TO to update one table from the other. Since I am in the process of learning the more complicated way of the SELECT SQL, I would rather use a SELECT SQL.
So in this case, what am I missing, or what is wrong in the SELECT SQL?
Thanks
 
What is the primary key of your table? New data will have primary keys not found in the old data; you can use that instead of LAYOUT.
But to get every change of TENG_XREF into TENG_SHTD, new records are only one of three steps:

1. update changed data (updates or replaces)
2. add new data (inserts or append)
3. delete deleted data (deletes in both sql and xbase)

You only did step 2, and since LAYOUT is a field with only a few distinct values shared by many records, you likely get no new data. Overall this is not an easy thing, and you may get update conflicts if many employees work on the data and change the same records. It's much easier to let everybody work on the central table instead of on offline data to merge back into the central database. In short, that's called replication, and it is not a simple thing.
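Assuming a primary key field (here called ID, a hypothetical name; your tables may not have one yet), the three steps could be sketched in VFP roughly like this:

Code:
* 1. update changed data (each field needs its own subquery in VFP's UPDATE)
UPDATE TENG_SHTD ;
   SET DESCRIP = (SELECT DESCRIP FROM TENG_XREF XR WHERE XR.ID = TENG_SHTD.ID) ;
   WHERE ID IN (SELECT ID FROM TENG_XREF)
* 2. add new data
SELECT * FROM TENG_XREF WHERE ID NOT IN (SELECT ID FROM TENG_SHTD) ;
   INTO CURSOR NEWROWS
SELECT TENG_SHTD
APPEND FROM DBF('NEWROWS')
* 3. delete deleted data (only valid if TENG_XREF holds the complete data set)
DELETE FROM TENG_SHTD WHERE ID NOT IN (SELECT ID FROM TENG_XREF)

This is only a sketch of the mechanics; step 3 is wrong if TENG_XREF contains just a subset of the data.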

Just to show you the tip of the iceberg: two users each start with the same TENG_XREF data, and both of them change data of a certain customer. Which record do you put back? User Z worked on the central database and deleted that customer; what now?

Your life gets easier if TENG_XREF starts empty each day and you simply want to add all of its data into TENG_SHTD; then you don't need any query, you simply APPEND FROM TENG_XREF itself. But I can't tell you what would be right; that very much depends on your data and its meaning, primary keys, what you finally want to do, and how to resolve conflicts.

There's one other simple case: TENG_XREF doesn't start off empty each day, but with whatever data is in TENG_SHTD, so it's a renamed copy of TENG_SHTD. Then at the end of the day, when TENG_SHTD was untouched and a user worked on TENG_XREF, TENG_XREF simply is the new TENG_SHTD, and you replace TENG_SHTD with TENG_XREF. If you instead would first find out which records are new (to add them), which records have changed (to change them), and which were deleted (to also delete them in TENG_SHTD), then you have a lot of complex code just to end up with the same data you already have in TENG_XREF.

So there you have two very extreme cases, in which TENG_XREF either only has new records or simply is the new data itself. The normal way to check out and later check in data is some mixture of these cases.
One thing is sure: your database tables need primary keys, and from how you describe your problem and go about it, this is a term you seem not to know at all; so, unfortunately, this seems quite unsolvable from your badly defined outset.

If I simply take the situation as described so far, two tables of the same structure, you can identify unchanged records by finding them in both tables. You might identify changed records by matching cust_name, but you can have many Smiths and Millers and other frequent names. The only thing that really identifies which record belongs back where is a primary key.

Bye, Olaf.
 
landfla said:
all I want is to append to Teng_shtd.dbf whatever was added to Teng_xref that, of course, is not in Teng_shtd; that is the reason I used the LAYOUT field.

Your general approach above is a decent way to get the 'added' records from the 2nd table.

But - Question:
* Are the LAYOUT fields populated in BOTH tables, and are their values UNIQUE to a single record?
If not, then it is not a good field to rely on for this approach.

You need to use a single field, or an expression combining multiple fields, which is populated in BOTH tables and is unique to each separate record.
Examples:
WHERE JOB_NO NOT IN (SELECT JOB_NO FROM TENG_SHTD)
WHERE (cust_name + JOB_NO) NOT IN (SELECT (cust_name + JOB_NO) FROM TENG_SHTD)
WHERE (cust_name + JOB_NO + SHT) NOT IN (SELECT (cust_name + JOB_NO + SHT) FROM TENG_SHTD)
etc.

You can easily test the query yourself by using the SET STEP ON command right after your SQL query.
In the development environment, that will suspend code execution, force the Trace window open, and let you go to the Command window to BROWSE the resultant query cursor.
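For example, adding it after the query from the original post (a sketch; this only works when running inside the VFP development environment, not from a compiled EXE):

Code:
SELECT JOB_NO, SHT, LAYOUT, DESCRIP, cust_name FROM TENG_XREF ;
   WHERE LAYOUT NOT IN (SELECT LAYOUT FROM TENG_SHTD) ;
   INTO CURSOR JUNK ORDER BY 1,2,3
SET STEP ON   && suspends execution and opens the Trace window
* now type BROWSE in the Command Window to inspect the JUNK cursor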

If you are 100% sure that records were added and your query is not finding them, then you need to modify your query until those new records are appearing.

Once your query is working to find the new records, your APPEND should work as you need.

Good Luck,
JRB-Bldr
 
If one of the purposes of what you're trying to do is to beef up your control of SQL, you may consider reducing your program to a single statement. Thus, assuming that you don't want to, as you put it, add to "Teng_shtd" what was already in there:

Code:
INSERT INTO TENG_SHTD (JOB_NO, SHT, LAYOUT, DESCRIP, cust_name) ;
   SELECT JOB_NO, SHT, LAYOUT, DESCRIP, cust_name ;
      FROM TENG_XREF ;
      WHERE NOT EXISTS (SELECT * FROM TENG_SHTD SH ;
         WHERE TENG_XREF.JOB_NO = SH.JOB_NO ;
           AND TENG_XREF.SHT = SH.SHT ;
           AND TENG_XREF.LAYOUT = SH.LAYOUT ;
           AND TENG_XREF.DESCRIP = SH.DESCRIP ;
           AND TENG_XREF.cust_name = SH.cust_name) ;
      ORDER BY 1,2,3

This single statement will insert into Teng_Shtd any new record coming from Teng_Xref.

Edit: this was far from working properly as initially stated.
 
Thanks guys,
Olaf, JRB and atlopes, all of you are correct. I found out that the LAYOUT field value is indeed not unique; that is the reason why nothing was getting appended, as the layout value was already in other records. So the idea is to use a concatenation such as
WHERE (cust_name + JOB_NO) NOT IN (SELECT (cust_name + JOB_NO) FROM TENG_SHTD)

The above will work, as I know that combination is unique. Thanks for all your help, very much appreciated; as I said, this is the place to learn.
 
Just notice again: this only appends new records with a new JOB_NO; it will not update TENG_SHTD with edited data from TENG_XREF, which concerns JOB_NOs that already exist.

What you need in each and every table is a primary key; that will be the only field you need to check for new values. If you make it an integer AutoInc, it also becomes much easier to determine new records, as those will be all records with an ID larger than MAX(ID) in TENG_SHTD.
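As a sketch of that check (assuming an integer ID field; note that if the ID column in TENG_SHTD is itself AutoInc, APPEND will regenerate the values instead of keeping those from TENG_XREF):

Code:
SELECT MAX(ID) FROM TENG_SHTD INTO ARRAY laMaxId
* laMaxId[1] is NULL if TENG_SHTD is empty; handle that case separately
SELECT * FROM TENG_XREF WHERE ID > laMaxId[1] ;
   INTO CURSOR NEWROWS
SELECT TENG_SHTD
APPEND FROM DBF('NEWROWS')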

You just solved part of the problem of merging back checked-out data, step 2 in my list above.

When you also want to detect changes of existing records, you will have to examine which records DO exist in both TENG_SHTD and TENG_XREF, and then take them from TENG_XREF. One easy solution to that is a DELETE+INSERT strategy instead of UPDATE: you DELETE FROM TENG_SHTD WHERE (cust_name + JOB_NO) IN (SELECT (cust_name + JOB_NO) FROM TENG_XREF), before the step to append new records. Because you delete the common records, those found in both TENG_SHTD and TENG_XREF, from TENG_SHTD, they will fall into the category of "new" data (at least when you work with SET DELETED ON).

The downsides of that approach: you will have lots of records marked for deletion and added back, often unchanged. And it will cause a problem with unique indexes, as deleting a row unfortunately does not delete it from an index. The best way around that is to PACK TENG_SHTD before doing the INSERT/APPEND step, but that becomes less attractive the more data you have.
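A sketch of that DELETE+INSERT sequence, using the cust_name + JOB_NO expression from above:

Code:
SET DELETED ON
* delete the common records, so they count as "new" again
DELETE FROM TENG_SHTD ;
   WHERE (cust_name + JOB_NO) IN (SELECT (cust_name + JOB_NO) FROM TENG_XREF)
* now everything in TENG_XREF qualifies as new and comes over in its edited state
SELECT * FROM TENG_XREF ;
   WHERE (cust_name + JOB_NO) NOT IN (SELECT (cust_name + JOB_NO) FROM TENG_SHTD) ;
   INTO CURSOR JUNK
SELECT TENG_SHTD
APPEND FROM DBF('JUNK')
* optionally remove the deletion-marked rows (needs exclusive access):
* USE TENG_SHTD EXCLUSIVE
* PACK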

Another big disadvantage of the DELETE+INSERT approach is how records are rearranged. In theory, this shouldn't matter: a table is just a set of data, and in (mathematical) set theory the order of the items (tuples) of a set doesn't matter. You do get in trouble with AutoInc on insert/append, but that problem is there anyway, even if you only append new data, so I'll take it out of the discussion; primary keys could be generated differently, and should be, if two users check out the same data, because automatically incrementing numbers starting from the same initial value will cause double keys when the two users put back their data.

If you want existing records to be kept where they are and be updated, the code for that has to become much more individual per table, as it involves plenty of [tt]SET InternalTable.field = ExternalTable.field[/tt] clauses within an UPDATE-SQL, or [tt]REPLACE InternalTable.field WITH ExternalTable.field[/tt]; there is no simpler syntax in the xBase REPLACE command either.

Overall you are implementing what TABLEUPDATE() does for you when you work with a single table and buffering: it also merges any change of new, edited and deleted buffered data into a central DBF, which is kept unchanged until the TABLEUPDATE() is done. I always warn people about this when they do it in cases where they could simply learn the few settings needed for working with buffered data and the TABLEUPDATE() solution, maybe even combined with transactions.

What you absolutely and unavoidably miss is the ability to know about conflicts, which TABLEUPDATE() can detect from the buffers. Besides buffering new data, the buffer also remembers the old data as it was when fetched, and can compare that to the current DBF data and the buffered data, to see whether these three states of a field differ and mean a conflict, or whether only the buffered data changed from the fetched and still stored value, in which case your commit is the only commit of new data, with no conflict.

The three states are [tt]WorkareaAlias.field[/tt], [tt]OLDVAL(WorkareaAlias.field)[/tt] and [tt]CURVAL(WorkareaAlias.field)[/tt], and they can all be different values. An update is necessary if [tt]WorkareaAlias.field[/tt] differs from [tt]CURVAL(WorkareaAlias.field)[/tt], and a conflict arises if at the same time [tt]OLDVAL(WorkareaAlias.field)[/tt] differs from [tt]CURVAL(WorkareaAlias.field)[/tt], because then another user has already changed that data and you have to ask whether your value or the other user's value is correct, or whether you even need a third or fourth value. E.g. if the field is the stock count of items and one user decreases it from 10 to 9 while another user decreases the same initial 10 to 8, then actually neither 9 nor 8 is right; overall the stock should be reduced by 3 and become 7.
Such data change conflicts are hard to detect when working with offline data, and most probably easy to handle in a central database when the changes happen at separate times of the day, but they are a hard problem if data is offline all day.
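A minimal sketch of that buffering mechanism (the field name is hypothetical; see the TABLEUPDATE() help topic for the full parameter list):

Code:
SET MULTILOCKS ON
USE TENG_SHTD SHARED
CURSORSETPROP("Buffering", 5)        && optimistic table buffering
* ... edits happen here ...
IF NOT TABLEUPDATE(.T., .F.)         && try to commit all rows without forcing
   * a conflict: inspect the three states of a field
   ? TENG_SHTD.cust_name             && your buffered value
   ? OLDVAL("cust_name")             && the value as it was when fetched
   ? CURVAL("cust_name")             && the value currently stored in the DBF
   * decide: TABLEUPDATE(.T., .T.) to force your value, or TABLEREVERT(.T.)
ENDIF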

I doubt you will stay happy with your current approach of handling data this way. If you need to support users working in offline environments, I agree a local database with data you check out from the central data is the only really viable solution. VFP does offer Offline Views, a well-kept secret most never talk about; but that's also because it doesn't really let you work on offline data with the same code as written for online data. You query a view instead of the DBF in offline mode, so that would cause a major rewrite of the data access code.

The simplest solution is having no need to go offline with data at all. With today's everywhere internet connectivity (this depends on the country, too), you could think about putting data into the cloud, which doesn't necessarily mean Amazon AWS, Microsoft Azure or Google GCP; it can also be a MySQL database. Don't get me wrong, though: online/offline here has only a weak relation to internet connectivity. These terms here mean the difference between working on the actual central database (online) and on extracts of data (offline). An always-on internet connection and a central database accessible via the internet just mean no client needs offline data extracts; they can all work on the online data, where "online" this time means both central and on the internet. That approach also means a major rewrite of code, but it adds the opportunity to work with the online data in a website.

Bye, Olaf.
 
Thanks Olaf, great explanation and very well constructed. I will read this all over again as it is very interesting; thanks again for your advice.
The data will always change in Teng_xref.dbf, so as you said, it is not only about adding new records from it to Teng_shtd, but also making sure that if something gets modified in Teng_xref, we reflect those changes in Teng_shtd.
Thanks
 
Olaf,
A primary key will not work, as I understand more people are working each with their own TENG_XREF.DBF which must update TENG_SHTD.DBF.
Consider: two people are each working with their own TENG_XREF.DBF with an autoincrement primary key; nothing prevents you from ending up with the same IDs across the TENG_XREF.DBFs, so I suggest implementing a GUID. For this, a GUID creator needs to be added, plus a field to control a 'double update', and moreover a procedure that avoids/controls the update of records already updated by the other user.
The scenario Landfla is describing here is more complicated, unless of course my understanding is wrong and we are not speaking of a multiuser application here, in which case I believe the updating of TENG_SHTD is overkill: just don't work with TENG_XREF but always work with TENG_SHTD.
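A commonly used way to create GUIDs in VFP is calling the Windows API; a sketch (error handling omitted):

Code:
FUNCTION CreateGuid
   LOCAL lcGuidBin, lcGuidText
   DECLARE INTEGER CoCreateGuid IN ole32.dll STRING @pguid
   DECLARE INTEGER StringFromGUID2 IN ole32.dll ;
      STRING rguid, STRING @lpsz, INTEGER cchMax
   lcGuidBin  = REPLICATE(CHR(0), 16)
   lcGuidText = REPLICATE(CHR(0), 80)
   CoCreateGuid(@lcGuidBin)
   StringFromGUID2(lcGuidBin, @lcGuidText, 40)
   * StringFromGUID2 returns a Unicode string; convert and strip the nulls
   RETURN STRTRAN(STRCONV(lcGuidText, 6), CHR(0), "")
ENDFUNC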
Jockey(2)
 
Jockey, just read what I said:

OlafDoschke said:
I'll take that out of the discussion, as primary keys could be generated differently and should be if two users check out the same data, or automatic incrementing numbers starting from the same initial value will cause double keys, when the two users put back their data.

And yes, GUIDs are the type of primary key you need, to be able to generate them independently from the main database.

Landfla said:
it is not only adding new records from it to Teng_shtd but also make sure if something gets modified in Teng_xref, we need to reflect those changes in Teng_shtd
Well, the code does not do that, you only append new data. There is no single command in xBase or SQL doing what you want. T-SQL offers MERGE, but there is nothing like that in VFP.
Also, you don't have a good way to care about offline deletions. You are far from a solution.

Bye, Olaf.



 
Jockey assumes a multiuser system (in a LAN); I assume you extract and copy data to offline notebooks, for application users working like a "traveling salesman", on the road and not on site, who later put back changed and new data. If Jockey is correct, the best solution really is not to work with a secondary DBF.

If I'm right, then also let me suggest some things to make this easier. Primary keys were already mentioned; especially GUID keys can be generated independently on the central and decentral databases and won't overlap. The GUID mechanism and the size of GUID fields guarantee that; in MSSQL (T-SQL) the corresponding field type is called "uniqueidentifier" for a reason.

Other fields to your advantage would be a timestamp, version or datetime field you set to DATETIME() with any change of data. Then you can detect changed data by comparing TENG_SHTD.timestamp and TENG_XREF.timestamp where TENG_SHTD.GUID = TENG_XREF.GUID. Where they differ, you know which record is the latest.
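With GUID and timestamp fields in place (both field names hypothetical here), detecting records changed offline could look like:

Code:
* same GUID in both tables, but the offline copy has the newer timestamp
SELECT XR.* ;
   FROM TENG_XREF XR ;
   JOIN TENG_SHTD SH ON XR.GUID = SH.GUID ;
   WHERE XR.timestamp > SH.timestamp ;
   INTO CURSOR CHANGED

The records in CHANGED then need the per-field UPDATE or REPLACE treatment described earlier.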

Going back to the proposed simple solution with DELETE and APPEND, notice this bad behaviour: if TENG_XREF starts as a copy of TENG_SHTD and users only change some data in TENG_XREF, the DELETE FROM TENG_SHTD WHERE (cust_name + JOB_NO) IN (SELECT (cust_name + JOB_NO) FROM TENG_XREF) will still delete all TENG_SHTD data, as all the old cust_name + JOB_NO combinations are still there in TENG_XREF. The final APPEND of TENG_XREF into TENG_SHTD then simply appends all of TENG_XREF, so this is actually quite an overcomplicated way of replacing TENG_SHTD with TENG_XREF, if you think about it.

If you do so with the data coming back from multiple users, the last committed TENG_XREF will make up the new TENG_SHTD. So you really need something more sophisticated that detects only the records really changed by users, and even then you have a problem with two users making different changes to the same cust_name + JOB_NO combination.

Make the thought experiment of what happens:

Users A and B check out an initial state of a record with timestamp 2017-05-15 2:05:06 pm. Today, User A changes it at 2017-05-19 9:14:23 am, and User B changes it, from the same initial timestamp 2017-05-15 2:05:06 pm, at 2017-05-19 1:45:36 pm. Now both users commit their data into the central database, one after the other. Assume User B is processed first: his record's 2017-05-19 1:45:36 pm is newer than the central database timestamp 2017-05-15 2:05:06 pm, so you update the record. In the commit of User A, 2017-05-19 9:14:23 am is later than the original 2017-05-15 2:05:06 pm, but earlier than 2017-05-19 1:45:36 pm, so it is not used for the update. In this specific case you really have a conflict, which you resolve with the rule that later changes are prioritized.

What's more important, think about records without a conflict, records that only User A or only User B changed: their change timestamp will be higher than the database timestamp, but once the record is updated, the unchanged data with the older timestamp from the other user (who did not modify this record) is not used to update the central database, which would actually downgrade it to the previous state.

This is exactly the nature of the changes you get if you oversimplify the process of merging back data by only considering the primary key; so a timestamp is not only nice to have, it is quite a must-have in a situation of distributed data.

Bye, Olaf.
 
Olaf,

sorry, I overlooked your correct statement
Code:
so I'll take that out of the discussion, as primary keys could be generated differently and 
should be, if two users check out the same data, or automatic incrementing numbers starting 
from the same initial value will cause double keys, when the two users put back their data
and yes, apart from a timestamp field, a logical -lSync- field (indicating whether to be synchronized or not) will also help and speed things up.
The other possible difficulties/conflicts I have found occur rarely, but you do have to take care of them. I found the easiest solution was to list those conflicts in a separate table and have an administrator decide what to do with them.

Regards,

Jockey(2)
 
Hi Guys,

Yes, it is a multiuser environment and its role is in the LAN only, but I don't really care if users change the same record at different times. What I am doing, or intend to do, is update and add whatever is the latest in Teng_xref and pass that to Teng_shtd. I will run this using the Windows Task Scheduler at night, so whatever was the last change in Teng_xref is what counts.

I noticed something, and I am getting what I wanted; please correct me if you find anything incorrect, if it is not the best approach, or if I am missing something.

This is what I am using. It updates the existing records in Teng_shtd if they were changed in Teng_xref, and also adds to Teng_shtd if a new record has been added in Teng_xref:

Code:
SELECT JOB_NO, SHT, LAYOUT, DESCRIP, cust_name ;
   FROM TENG_XREF ;
   WHERE (JOB_NO+SHT+LAYOUT+DESCRIP) NOT IN ;
      (SELECT (JOB_NO+SHT+LAYOUT+DESCRIP) FROM TENG_SHTD) ;
   INTO CURSOR JUNK ;
   ORDER BY 1,2,3
SELECT TENG_SHTD
APPEND FROM DBF('JUNK')
CLOSE TABLES
CLOSE ALL
QUIT

So I just created an EXE file that will run each night and will update Teng_shtd.
Any suggestions are very much appreciated.
Thanks
 
This code is still not updating data for existing JOB_NO+SHT+LAYOUT+DESCRIP combinations; you only add new records.

If you have TENG_SHTD in a central file share on the LAN or could make its location a file share for all users, you better work on that single DBF without a second local TENG_XREF for every user.

Just get your knowledge about shared data access straight and program it; it's much easier to accomplish than what you're trying to do with your nightly update, which is not an update. You don't yet seem to realize how much harder you make your life with the idea of merging back each single user's changes only at night.

Bye, Olaf.
 
Mr. Landfla,
if you don't care that changes made to tables by user1 are overwritten by user2, then I wonder what you call an 'update'; it seems to me you are creating a garbage.dbf. Olaf is very much correct: you are not updating, you are adding data, and in that case there is no use at all in working with 2 DBFs.
Regards,
Jockey(2)
 
Well, after carefully checking Teng_shtd: yes, it is only adding records. If something is changed in Teng_xref, it adds that as another record with the same values. I thought it was updating the existing record, but no, not unless each field of each record is compared and, where the value differs, replaced with the one from Teng_xref.

The problem is that Teng_xref is still an FDOS table, and I am using the same table structure and data in a VFP 9.0 application. Sometimes, for certain reasons, the Teng_xref index files get corrupted and I need to reindex. So I was avoiding using that DBF in the other application, in order to be able to reindex when needed; otherwise I always have to tell the users of the VFP 9.0 application to get out so I can reindex Teng_xref. In the VFP 9.0 application I do not use index files; I only open the table in the Load event under an alias, which I can then use throughout the application. But if I need to reindex it, I have to tell users to get out. So I still use Teng_xref in FDOS, and frequently the index file gets corrupted; I have not found the reason for this corruption. So I could just copy TENG_XREF to TENG_SHTD each night using an XCOPY command in a batch file, and that would resolve my problem.
Thanks
 
The literal meaning of append is adding at the end; concatenation. REPLACE (the Xbase command) and UPDATE (the SQL command) are for updating existing records, nothing else; APPEND is like INSERT, so it is only for new data. The SELECT for records with a combination of data NOT found in the target table likewise only finds the new data not yet existing at all, not even in an older state. You are beginning to understand, but you still won't find an easier way to merge back the data you separated from the central DBF file for the whole day. There is no easier way than to work on that central DBF file directly, in shared mode.

Bye, Olaf.
 
Yeah Olaf, I understand what you said, but in my case I have the FDOS table with index files, and sometimes in the middle of the day I need to reindex the table (Teng_xref). As you know, if the same table is in use by another application, I cannot get exclusive use of it for reindexing; I would have to close the VFP app in order to reindex Teng_xref. That is the reason I use a copy of it (in this case Teng_shtd) for the VFP app: that way, if I need to reindex, only the one or two users using Teng_xref need to be out for a few seconds until the reindex is done. Only Teng_xref gets new and modified records, so I try to keep Teng_shtd up to date with Teng_xref; that is the reason for trying to update Teng_shtd, which is only used for searching some values and as a source for a combobox in the VFP app. So I believe, to make this simpler, I will overwrite Teng_shtd with Teng_xref. I was trying not to do that, as I want to advance in VFP 9.0, especially using SELECT SQL.
Thanks
 
Well, you can't just overwrite the file, as that kills all changes made by any user other than that one. The process you want is much more complex than one append, one update or one delete.

In contrast, working with the single table, all you need is proper sharing and the settings to let each user do a TABLEUPDATE() once in a while.

Your need to reindex points to network or network protocol problems; that's not normal, and all you need is to stabilize that. An index is not static; you don't need to reindex multiple times a day, as each single data change also updates the index tags, and the only reason for reindexing is to limit bloating effects. Those were worst in VFP7 but are not a VFP9 problem.

Bye, Olaf.
 