Not at all. Keep your database structure simple, using indexes where appropriate, and you'll get good performance. Complicating the structure only slows things down (in general).
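For example, a flat table with an index on the column you actually search by is usually all you need. A minimal sketch using SQLite in Python (the table and column names here are invented for illustration):

```python
import sqlite3

# Hypothetical "orders" table, looked up by customer_id.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total       REAL
    )
""")

# One simple index on the lookup column is often all the "structure" you need.
conn.execute("CREATE INDEX idx_orders_customer_id ON orders(customer_id)")

# Lookups on customer_id can now use the index instead of scanning the table.
rows = conn.execute(
    "SELECT id, total FROM orders WHERE customer_id = ?", (42,)
).fetchall()
```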
If your tables are not indexed, a lookup on a column or a join between two tables can be significantly slower on only a few thousand rows than the same query against a properly normalized, properly indexed table holding many millions of rows.
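You can see that difference directly in the query plan. A rough sketch with SQLite and a hypothetical table of a few thousand rows, checking the plan before and after adding an index (all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO items (sku, qty) VALUES (?, ?)",
    ((f"SKU-{i}", i % 100) for i in range(5000)),
)

query = "SELECT qty FROM items WHERE sku = ?"

# Without an index on sku, expect the plan to show a full scan of items.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("SKU-4321",)).fetchall())

conn.execute("CREATE INDEX idx_items_sku ON items(sku)")

# With the index, expect a direct index search instead of a scan,
# which is what keeps lookups fast even as the table grows to millions of rows.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", ("SKU-4321",)).fetchall())
```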
Splitting the data across two tables would be much slower, since each search would have to run twice, once per table. I have a server with over 500 databases, many with tables exceeding 20 million records and well over 20 fields, and there are no problems as long as the index is good.
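To illustrate the "run twice" point: if the same kind of record is split across two tables, every lookup has to hit both (typically via a UNION), whereas a single well-indexed table is searched once. A rough sketch with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Split design: the same kind of record lives in two tables,
# so every search has to be issued against both.
conn.execute("CREATE TABLE records_a (id INTEGER PRIMARY KEY, ref TEXT, val REAL)")
conn.execute("CREATE TABLE records_b (id INTEGER PRIMARY KEY, ref TEXT, val REAL)")
split_lookup = """
    SELECT id, val FROM records_a WHERE ref = ?
    UNION ALL
    SELECT id, val FROM records_b WHERE ref = ?
"""
conn.execute(split_lookup, ("R-1001", "R-1001")).fetchall()

# Single-table design: one indexed table, one search.
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, ref TEXT, val REAL)")
conn.execute("CREATE INDEX idx_records_ref ON records(ref)")
conn.execute("SELECT id, val FROM records WHERE ref = ?", ("R-1001",)).fetchall()
```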
We track by order number, group by load, and subset by product per load. The tables run at around 8 million records each for any given month, with the audit trail hitting 30 million+ rows per month.
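As a rough sketch of that access pattern (all table and column names here are hypothetical), the "group by load, subset by product" reporting might look like this, with a composite index matching the grouping:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE order_lines (
        order_no INTEGER,
        load_id  INTEGER,
        product  TEXT,
        qty      INTEGER
    )
""")

# A composite index on the grouping columns helps keep this kind of query
# manageable even at millions of rows per month.
conn.execute("CREATE INDEX idx_lines_load_product ON order_lines(load_id, product)")

# Group by load, then subset by product within each load.
per_load_product = conn.execute("""
    SELECT load_id, product, SUM(qty) AS total_qty
    FROM order_lines
    GROUP BY load_id, product
""").fetchall()
```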
Don't be shy, just get damn good hardware: lots and lots of RAM and multi-core, multi-threaded processors.