The choice between importing and keeping separate tables is not a question of what is faster, but of what is correct to do.
If you regularly get new data in DBF form, or produce a DBF from a CSV, the usual way is to then import this into your database tables and not keep the new data in separate tables.
When I talked about 1:n or 1:1 related data, I was not talking about SET RELATION, but about how tables are related to each other in database terminology. The two things are related - no pun intended: of course a SET RELATION can be made if there is a data relation between the tables, but a table relation is established by a foreign key and in VFP would normally be used via SQL with a JOIN. We're in legacy Fox here, so that is no option - but as said, I know webuxer uses VFP9, and I think you do, too, right?
Dan has given a relation example. It's your only chance if you have more than 255 fields overall.
In regard to a [tt]RELATION[/tt] vs. [tt]SELECT * FROM table1 LEFT JOIN table2 ON table1.Prodnum=table2.Prodnum[/tt]: a relation will always seem faster, as it right away only joins one row of table2 data to the current table1 record. But in general SQL is capable of doing more than simple relations, and it's easier to handle one cursor with all data in it, to bind to that, etc.
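To make the comparison concrete, here is a minimal sketch of both variants, reusing the table1/table2/Prodnum names from above; the index tag name "Prodnum" on table2 is my assumption:

[code]
* Variant 1: SET RELATION - needs an index on the child table's key.
USE table2 IN 0 ORDER Prodnum   && assumes a tag "Prodnum" exists
USE table1 IN 0
SELECT table1
SET RELATION TO Prodnum INTO table2
* From now on, every record pointer move in table1
* automatically positions table2 on the matching Prodnum.

* Variant 2: SQL - one cursor with all joined data in it.
SELECT * FROM table1 ;
   LEFT JOIN table2 ON table1.Prodnum = table2.Prodnum ;
   INTO CURSOR crsJoined
[/code]

Variant 2 gives you a single work area you can bind a grid or controls to; variant 1 avoids the copy but ties you to two work areas.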
I have had cases where I knew I loaded all ~1000 user records of an application anyway for frequent access, and set several RELATIONs from one main cursor into users under several aliases (usersX, usersY) on each userid field, to then let a grid display usersX.name, usersY.name, etc., instead of joining users into the main cursor. I'd only use that in situations where the table the relation ends in has just a few records (1000 surely counts as a few). It's also not always possible; e.g. a listbox doesn't let you put several work areas into its columns, this is pretty much limited to grids. Scott is also correct that an index speeds this up. You can only relate via an index or RECNO(), and RECNO() relations only make sense in the 1:1 case, which is much more seldom, so RELATIONs are effectively forced to be accelerated by indexes. But so is SQL - to be clear: you're dumb if you don't have indexes on primary and foreign keys.
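The several-aliases trick sketched, as a hypothetical example; the field names createdby/changedby and the tag "userid" are assumptions, not from the original application:

[code]
* Open the same users table twice under different aliases.
USE users IN 0 AGAIN ALIAS usersX ORDER userid
USE users IN 0 AGAIN ALIAS usersY ORDER userid

* Relate each userid field of the main cursor into its own alias.
SELECT main
SET RELATION TO createdby INTO usersX
SET RELATION TO changedby INTO usersY ADDITIVE

* A grid bound to "main" can now show usersX.name and usersY.name
* in its columns - no join, no extra copy of the main data.
[/code]

Note the ADDITIVE clause on the second SET RELATION; without it the first relation would be cleared.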
Just remember: a relation, just like a filter, is not evaluated only once. Every time you move the record pointer in a table that has a filter on its own data and RELATIONs set into other tables, you cause a cascading effect. Think of it as an object having several tails with anything at the end of them, and each connected object can have further tails; you might then also get into trouble with circular or contradicting references. All these tails mean seeks, and seeks are fast by definition, though how fast can still depend on the data volume, and you also can't have more complex join conditions. In the end a relation is only a nice way to avoid yet another copy of all the main table data decorated with a few more joined fields, if there are only a few relations, at best just a single one. But if that data should permanently live in the main table's own fields, then importing it is only done once per day/week/month, and then this isn't a question of performance anymore; you then compare only handling one table vs. two with a RELATION. Then one table of course wins.
Another case besides the field count limitation is that you want a "live" join, which RELATIONs are: if data changes while you work with it, this will not be reflected in a query result - a query result always only reflects the data at the time of the query, while a relation looks into the current data. If a user is deleted, the query result still has the no longer existing user's name in its result set, whereas the RELATION will then go to EOF: no user name found. It can also be an advantage, though, that you act on copies you can later merge back into DBF or remote table data.
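The snapshot vs. live difference in a short sketch; the orders/users table and field names here are assumptions for illustration:

[code]
* Snapshot: the query result is frozen at query time.
SELECT orders.*, users.name FROM orders ;
   LEFT JOIN users ON orders.userid = users.userid ;
   INTO CURSOR crsSnapshot
* A user deleted after this still shows up in crsSnapshot.

* Live: the relation always looks into the current users data.
USE users IN 0 ORDER userid   && assumes a tag "userid" exists
SELECT orders
SET RELATION TO userid INTO users
* After a deletion (with SET DELETED ON) the relation positions
* users at EOF, so users.name comes back empty for that order.
[/code]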
The choice is never just about the speed of the alternatives.
Bye, Olaf.