Elysium: see bottom of post for my swing at the curve.
dmuroff: OK, now we're cooking with gas. Answer a question: is your download static, or will the products be downloaded regularly, with regular (in terms of time) changes? If it's relatively static and we can create one new big product table, then simple append queries will handle the original import. Don't bother automating if you're only doing this once or infrequently, UNLESS you've got to turn it over to an end user to do. Regardless, first append the similar data. Once we've got all that stuff in one big table, we can worry about the details. Decide on your structure. If we do the multiple tables, create a list of import file names and, if necessary, a list of corresponding table names, e.g. crunchies.csv = tbl_crunchies, etc. Then either open the 40 or so import datasets and cut and paste into new tables, or design a routine to read your list and do it for you (see the sketch below). Designing a routine takes longer at first, but it will save time if the task becomes a regular thing, and it will be required eventually if end users have to manage such data transfers.
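A minimal sketch of such a routine, assuming a two-column driver table (tbl_ImportList: FileName, TableName), a master table tbl_Products, a fixed import folder, and a handful of like fields - every name here is a hypothetical placeholder for your own:

```vba
Sub ImportAndAppendAll()
    Dim db As DAO.Database, rs As DAO.Recordset
    Set db = CurrentDb
    Set rs = db.OpenRecordset("tbl_ImportList", dbOpenSnapshot)
    Do While Not rs.EOF
        ' Pull each csv into its own staging table...
        DoCmd.TransferText acImportDelim, , rs!TableName, _
            "C:\Imports\" & rs!FileName, True
        ' ...then append only the like fields to the big product table.
        db.Execute "INSERT INTO tbl_Products (ProductID, ProductName, Price) " & _
            "SELECT ProductID, ProductName, Price FROM " & rs!TableName, dbFailOnError
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

Adding rows to tbl_ImportList is then all an end user ever has to touch.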
Now, if you really need to keep all your products in different tables and NOT create a master product table, then answer this question: is ProductID always unique - across ALL tables? Can't have ANY dupes (a quick check is sketched below). If yes, create a master list of ProductIDs and link all the tables as children to that master list. Remember, we are NOT trying to show all the description fields yet!! Just all the like fields. I don't like this method, btw.
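A quick uniqueness check, assuming a saved union query (qryAllProductIDs, hypothetical) that just stacks the ProductID column from every table with one UNION ALL leg per table:

```vba
Sub CheckDupeProductIDs()
    ' qryAllProductIDs is along the lines of:
    '   SELECT ProductID FROM tbl_crunchies
    '   UNION ALL SELECT ProductID FROM tbl_munchies ... (etc., all 40-odd)
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT ProductID FROM qryAllProductIDs " & _
        "GROUP BY ProductID HAVING COUNT(*) > 1")
    If rs.EOF Then
        MsgBox "No dupes - safe to build the master ProductID list."
    Else
        MsgBox "Duplicate ProductIDs found - fix these before linking."
    End If
    rs.Close
End Sub
```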
Now, do you have to see all the products, with all their descriptions, in one table view? I don't think so. If you tell me yes, ask yourself why. See if there is a simpler way to achieve input/output than such a table. If the answer is still yes, this can be done with a crosstab view query, but that will be more than a bit tricky, and I would not care to make it work straightaway. If I had to do that - and I have had to do similar things - I would pull everything into an intermediate flat table. This is NEVER a table that accepts input. It is ONLY for output; we will have to manage input some other way. Once I have the intermediate table, I can pull up views I could not otherwise get. I can automate a query or series of queries to accept a little input from me for period or other parameters, and then let them rebuild the table (sketched below). This table has no function other than providing output, so no data is ever destroyed, and it can be rebuilt as many times as you like.
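Here's roughly what the rebuild step looks like, assuming your descriptions live in one detail table and there's a saved crosstab query named qryProductCrosstab (both hypothetical):

```vba
Sub RebuildFlatOutputTable()
    ' qryProductCrosstab would be something along the lines of:
    '   TRANSFORM First(AttrValue) SELECT ProductID
    '   FROM tbl_ProductDetail GROUP BY ProductID PIVOT AttrName;
    Dim db As DAO.Database
    Set db = CurrentDb
    On Error Resume Next
    db.TableDefs.Delete "tbl_ProductFlat"   ' throw away the old copy
    On Error GoTo 0
    ' Make-table from the crosstab: output only, rebuild as often as needed.
    db.Execute "SELECT * INTO tbl_ProductFlat FROM qryProductCrosstab", dbFailOnError
    db.TableDefs.Refresh
End Sub
```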
Sorry for the full book here, but I can't say it any quicker.
Now for the curve: thwack! It's a hit! But where will it go?
If indeed this db was created and then asked to function in a larger capacity, I would know that it was never designed to do this, and that it would be just as efficient to build a new db and import the data. I would then tell my client or boss or what have you what I think is needed to make such a change. We've got some structures we can keep if we like, but if we are going to increase the functionality in such major ways as POS systems, we are going to go to SQL Server or something similar. We are not going to do it "on the cheap". This means there is either time or funding to create more robust alternatives. One of my assumptions is that the robust but initially expensive approach is NOT justified or desired. Regardless, this effort can be grown into a larger work. Migrating from (1) to (4), where there is one detail table and a key to the field characteristics, should be quite doable (a rough sketch of that shape is below). It will take more work in toto, but often in real-life business this is the way one must move forward.
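For what it's worth, here's my reading of that (4) shape, as Jet DDL - one detail table plus a key table describing the field characteristics. All names are placeholders, and your column types will differ:

```vba
Sub CreateDetailTables()
    Dim db As DAO.Database
    Set db = CurrentDb
    ' Which descriptive fields exist, and which product family each belongs to.
    db.Execute "CREATE TABLE tbl_Attribute (" & _
        "AttrID AUTOINCREMENT CONSTRAINT pkAttr PRIMARY KEY, " & _
        "AttrName TEXT(50), ProductFamily TEXT(50))", dbFailOnError
    ' One row per product per attribute, instead of 40-odd wide tables.
    db.Execute "CREATE TABLE tbl_ProductDetail (" & _
        "ProductID LONG, AttrID LONG, AttrValue TEXT(255), " & _
        "CONSTRAINT pkDetail PRIMARY KEY (ProductID, AttrID))", dbFailOnError
End Sub
```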
Mark
<O>
_|_