Help transferring data from one table to another where records are identical


Luiz Eduh (Technical User)
Jan 10, 2012
Hello,

I need some help from the experts.

I have two tables that share the same first column (name: prodnum, type: Character, width: 8). The first table doesn't have a description for each prodnum; the second table does. How can I transfer the description from table 2 to table 1 where the prodnums are identical? The Description field of table 2 is Character type, width 28, and I have created the same field in table 1 so the data can be transferred (imported).

Thank you in advance
 
It'll go something like this.

Code:
Use table2
Index on Prodnum to temp   && temporary IDX; the relation needs table2 ordered on Prodnum
Select 0
Use table1
Set Relation to Prodnum into table2   && each table1 record now points at its table2 match
Replace all Description with table2.Description   && pull the matching description across

That said, it's actually fairly unusual to do this sort of migration. If you have a linking field in the two tables (which you appear to have), it's more common to leave them separate and set a relation as needed.
 
dan freeman said:
...it's more common to leave them separated...

I agree on the one side. If both tables are permanent tables and the data is related 1:n or even 1:1, the separation can be the ideal fourth normal form of the data.
On the other side, if this is an import and table 2 is just temporary import data, perhaps arriving regularly, such a "migration" import is a good way to go about importing such data.

I happen to know you have VFP9, webuxer, as you normally post in the other VFP forum, so why not use it?

Code:
UPDATE table1 ;
    SET description = table2.description ;
    FROM table2 ;
    WHERE table1.prodnum = table2.prodnum

Bye, Olaf.
 
Great,

Thanks, danfreeman and OlafDoschke. Your input helped me accomplish what I wanted.
 
Which way will be faster: to work from two separate tables, or to merge all the data (if possible) and work from one, when it is a 1:1 relation? (Talking about tables with more than 500K records.) Is there a workaround for the 2GB file limit?
 
Linousa,
It depends on your indices on one or both tables. Either could be faster, depending on how you have it set up.

Getting past the 2GB limit is a bit challenging, but doable. My first recommendation: see whether you can archive data out of the file, and, if you work at it, whether you can reduce fields and/or field sizes. If you've trimmed that all the way down and you still need files larger than 2GB, the best option is to move to a SQL backend instead of FoxPro tables, but that's also a move to VFP.
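
For the archiving idea, a minimal sketch might look like this. The table name, field name, and cutoff (orders, orderdate, 2011) are hypothetical stand-ins for your own schema, and note that PACK needs exclusive use of the table.

Code:
* Hypothetical sketch: move old records into an archive DBF to shrink the live table
USE orders EXCLUSIVE
COPY TO archive_orders FOR YEAR(orderdate) < 2011   && write the old rows out
DELETE FOR YEAR(orderdate) < 2011                   && mark them deleted in the live table
PACK                                                && physically remove them and reclaim space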

That's a lot of data, though. I remember a table I had several years back with over 8 million detail records and a total record width of around 120 bytes. That was still a "small" file at around 960MB. So look for ways to take the bloat out of your file. I spent months, over the course of several years, getting this table down. I inherited it from someone else when it was around 200,000 records. At that time it had 18 indexes. I eventually got that down to 2 after about 3 years, and then down to 1 a year later, after rewriting a bunch of other areas of the application so that it would work best off a single index. (It was a compound index, but I think 2 or maybe 3 fields to create unique keys.)

So for the 2GB issue, I would consider archiving, which you might still expose as an "attached view" from within the application, or create a separate instance where the archive data is kept, though that is not as elegant.

Best Regards,
Scott
ATS, CDCE, CTIA, CTDC

"Everything should be made as simple as possible, and no simpler."[hammer]
 
The choice between importing and keeping separate tables is not a question of what is faster, but of what is correct to do.
If you regularly get new data in DBF form, or produce a DBF from CSV, the usual way is to import it into your database tables rather than keep the new data in separate tables.

When I talked about 1:n or 1:1 related data, I was not talking about SET RELATION, but about how tables are related to each other in database terminology. The two things are related (no pun intended): of course a SET RELATION can be made if there is a data relation between the tables, but a table relation is established by a foreign key and would normally be used via SQL with JOIN in VFP. We're in the legacy Fox forum here, so that is no option, but as I said, I know webuxer uses VFP9, and I think you do, too, right?

Dan has given a relation example. It's also your only option if you have more than 255 fields overall.

In regard to a [tt]RELATION[/tt] vs. [tt]SELECT * FROM table1 LEFT JOIN table2 ON table1.Prodnum = table2.Prodnum[/tt]: a relation will always seem faster, as it joins just one row of table2 data to the current table1 record. But in general SQL is capable of more than simple relations, and it's easier to handle one cursor with all the data in it, to bind to it, and so on.
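
For illustration, the SQL side might look like the sketch below. The local aliases and the cursor name are arbitrary, and table2's Description is renamed in the result to avoid a clash with the field table1 now also has.

Code:
* Snapshot of both tables joined into one cursor
SELECT t1.*, t2.Description AS Desc2 ;
    FROM table1 t1 ;
    LEFT JOIN table2 t2 ON t1.Prodnum = t2.Prodnum ;
    INTO CURSOR joined
BROWSE   && one cursor with all the data, easy to bind controls to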

I have use cases where I know I load all ~1000 user records in an application anyway for frequent access, and I set several RELATIONs from one main cursor into users, under several aliases usersX, usersY, on each userid field, to then let a grid display usersX.name, usersY.name, etc., instead of joining users into yet another main cursor. I'd only use that in situations where the table the relation ends in has just a few records (1000 certainly counts as a few). It's also not always possible; e.g., a listbox doesn't let you put several work areas into its columns, so this is pretty much limited to grids. Scott is also correct that an index speeds this up. You can relate only via an index or RECNO(), and RECNO() relations only make sense in the 1:1 case, which is much rarer, so RELATIONs are effectively forced to be accelerated by indexes. But so is SQL. To be clear: you're dumb if you don't have indexes on primary and foreign keys.
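
A minimal sketch of that multi-alias setup; the field names created_by and modified_by are hypothetical, and users is assumed to have an index tag on userid:

Code:
* Open the same lookup table twice under different aliases
USE users IN 0 AGAIN ALIAS usersX ORDER userid
USE users IN 0 AGAIN ALIAS usersY ORDER userid

* Relate the main cursor into both aliases at once
SELECT maincursor
SET RELATION TO created_by INTO usersX, modified_by INTO usersY

* A grid on maincursor can now show usersX.name and usersY.name per row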

Just remember: a relation, just like a filter, is not evaluated only once. Every time you move the record pointer in a table with a filter on its own data and RELATIONs set into others, you cause a cascading effect. Think of it as an object having several tails with something at the end of each, and each connected object can have further tails; you might also get into trouble with circular or contradicting references. All these tails mean seeks, and seeks are fast by definition, though how fast can still depend on the data volume; also, you can't have more complex join conditions. In the end it is only a nice way to avoid yet another copy of all the main table data decorated with a few more joined fields, if there are only a few relations, at best just a single one. But if that data should permanently live in the main table's own fields, then the import is only done once per day/week/month, and this isn't a question of performance anymore; you then compare handling one table vs. two with a RELATION, and one table of course wins.

Another case, besides the field count limitation, is when you want a "live" join, which RELATIONs are. If data changes while you work with it, this will not be reflected in a query result; a query result only reflects the data at the time of the query, while a relation looks into the current data. If a user is deleted, the query result still has the no-longer-existing user's name in its result set, while the RELATION will then go to EOF: no user name found. On the other hand, it is often an advantage to act on copies that you can later merge back into the DBF or remote table data.

The choice never is just about the speed of the alternatives.

Bye, Olaf.
 
I just want to add that cutting down on the number of index tags doesn't affect the size of any file other than the CDX.

Also, if the table that's bumping against the 2GB limit is the FPT, try doing PACK MEMO. Odds are there's lots of wasted space there.
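
For example (like PACK, PACK MEMO needs the table opened exclusively; the table name here is just a placeholder):

Code:
* Reclaim wasted space in the memo (FPT) file only
USE bigtable EXCLUSIVE
PACK MEMO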

Tamar
 
Hello, I'm back.

I'm not sure what I'm doing wrong. I have used the UPDATE command on other tables and it works fine, but for some reason, when I try to grab data from table2 into table1, nothing transfers. Here's what I have:
Table2 is named counties; it stores counties.zip, counties.city and counties.county, all type Character. I already have a table called customers which has lots of data stored (type Character): zip codes as zip, telephone number, address, city, contact name, etc. I added a field called county because I need this information in that table too.

I want to transfer the county data from table2 (counties) to table1 (customers) where counties.zip = customers.zip.

Here's the code I'm using:
Code:
UPDATE c:\data\customers.dbf ;
    SET county = counties.county ;
    FROM c:\data\counties ;
    WHERE counties.zip = customers.zip

The code runs, but nothing gets transferred. Any idea why this is happening?
Thank you
 
Your code runs fine. What is _TALLY after the UPDATE? Do you have the same zip codes in both tables?

Bye, Olaf.

 
Mr. Olaf,

Nothing updates; the message at the bottom of VFP9 says "Updated 0 records in 0 seconds."
In both tables I have zip codes, and in the counties table I have listed all the counties with their zip codes. I'm searching the customers table and, wherever a zip code matches in both tables, transferring the county from the counties table to customers. Not sure why it's not doing it.
 
What _TALLY is should be answered with ?_TALLY. I recommend you do that right after the UPDATE, to avoid reading a status bar message that belongs to a totally different, later update.
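
That is, immediately after your UPDATE runs:

Code:
* _TALLY holds the number of records the last command processed
? _TALLY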

Anyway, if there is no update, then you have no matching zip codes.
What are the field types of zip in the two tables?

For the join condition to work, you should have identical field types and lengths. Since zips are zips, that should be the case anyway.

Table collations might also play a role. In some (e.g. GENERAL) upper and lower case don't matter; in others (MACHINE) they do. It's recommended not to make use of collations and to stay with the MACHINE collation.
That makes it even more important to have equal case, either lower or upper, and equal length anyway. You don't want to slow down a query by comparing LOWER(ALLTRIM(counties.zip)) with LOWER(ALLTRIM(customers.zip)).
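
To narrow this down, here is a quick diagnostic sketch. It's a one-off check, so the ALLTRIM/UPPER cost doesn't matter here; the paths are taken from your post, and the cursor name is arbitrary:

Code:
* Check the zip field definitions in both tables
USE c:\data\customers IN 0
USE c:\data\counties IN 0
SELECT customers
DISPLAY STRUCTURE    && note the type and width of zip
SELECT counties
DISPLAY STRUCTURE

* Count how many zips actually match, ignoring trailing blanks and case
SELECT COUNT(*) AS nMatch ;
    FROM customers JOIN counties ;
    ON UPPER(ALLTRIM(customers.zip)) == UPPER(ALLTRIM(counties.zip)) ;
    INTO CURSOR chk
? chk.nMatch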

Bye, Olaf.
 
I'm presuming the zip code fields in both tables are character strings, correct? If they're not the same length, that could be your problem. I'm guessing counties might be 5 or 9 characters long, while customers might differ at 9 or 10 characters. If so, the issue is how you compare those zip fields and decide what constitutes a match.
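
If that turns out to be the case, one option (a sketch, assuming the first five characters are the plain 5-digit zip in both tables) is to match only on that part:

Code:
* Compare only the first five characters when the field widths differ
UPDATE c:\data\customers.dbf ;
    SET county = counties.county ;
    FROM c:\data\counties ;
    WHERE LEFT(customers.zip, 5) == LEFT(counties.zip, 5)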
 