We have a huge database with one table containing over 1,000,000 records (okay, that's huge for us!).
We get data from about a dozen different customers and load it all into a single SQL table. The data is all for the sale of the same product (livestock), but each customer has its own way of reporting the quality and grade of the animals to us.
As an example, Customer 1 populates 25 of the table's 50 fields, and Customer 2 may populate 30, but all in all there are only about a dozen fields common to every customer.
Worse, Cust 1's data in column Z has a totally different meaning from Cust 2's data in the same column, even though they both call it the same thing; technically, the two should be in different fields.
I am in the process of creating a separate table for each customer and copying the data over. Then I will create one master table (or maybe a view) holding just the comparable common data.
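Something like this is what I have in mind; the table names, column names, and types here are just made-up placeholders for illustration:

    -- One table per customer, holding only the fields that customer actually reports:
    CREATE TABLE Sales_Cust1 (
        SaleID   INT PRIMARY KEY,
        SaleDate DATE,
        Weight   DECIMAL(7,2),
        GradeZ   VARCHAR(10)   -- Cust 1's meaning of "column Z"
        -- ...the rest of Cust 1's 25 fields
    );

    CREATE TABLE Sales_Cust2 (
        SaleID   INT PRIMARY KEY,
        SaleDate DATE,
        Weight   DECIMAL(7,2),
        GradeZ   VARCHAR(10)   -- Cust 2's (different) meaning of "column Z"
        -- ...the rest of Cust 2's 30 fields
    );

    -- A view exposing only the dozen-or-so fields common to every customer:
    CREATE VIEW SalesCommon AS
        SELECT 'Cust1' AS Customer, SaleID, SaleDate, Weight FROM Sales_Cust1
        UNION ALL
        SELECT 'Cust2' AS Customer, SaleID, SaleDate, Weight FROM Sales_Cust2;

The idea is that the view gives us one place to query the comparable data, while each customer's quirky fields stay in that customer's own table.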
Does this sound like it will be more efficient, or should we keep it all in one table? Any ideas? Anyone?
Thanks!
Barb E.