"a lot of data in the future"
What do you consider "a lot of data"?
My concern here is that you may be "prematurely optimizing" your code when it may not be necessary. jbenson001 is right: adding a bit column to your table will do nothing to improve performance. But... moving data out of the table may not do much to improve performance either.
The real answer is, "it depends". If this is a "sloppy" table to begin with, then performance is going to be a problem. If the table is properly normalized, kept as narrow as possible, and properly indexed, then you may not have performance problems even with millions of rows in the table. Specifically:
1. Make sure your table is properly normalized. There are MANY articles out there that talk about [google]database normalization[/google]. If you don't know what this is, make sure you read a couple of those articles before moving forward with this task. (There is a small sketch of one normalization step after this list.)
2. Keep your table as narrow as possible. By "narrow", I mean keep the column definitions as small as possible: a narrow table has a relatively small number of columns, and those columns are as small as they can be. Try to eliminate or shrink your strings, and move unnecessary (or seldom-used) columns to another table. You will get the best performance with int columns that point to lookup tables (see the lookup-table sketch after this list). You see... SQL Server stores data on the drive in 8K pages. By packing as many rows as possible into each 8K page, you reduce your file I/O and your performance will increase.
3. Indexes. Make sure you do not have any duplicate indexes, and make sure that the queries you execute most often are covered by an index (see the covering-index sketch below). Indexes are the real key to optimal performance: properly indexed, a table with billions of rows can have queries that execute in milliseconds.
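To make point 1 concrete, here is a minimal sketch of one normalization step. The Orders/Customers tables and their columns are hypothetical; your own names will differ:

[code]
-- Before (denormalized): customer details repeated on every order row.
--   Orders(OrderID, CustomerName, CustomerCity, OrderDate, ...)

-- After: customer details are stored once, and orders reference them by key.
CREATE TABLE Customers (
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CustomerName varchar(100) NOT NULL,
    CustomerCity varchar(50)  NOT NULL
);

CREATE TABLE Orders (
    OrderID    int IDENTITY(1,1) PRIMARY KEY,
    CustomerID int NOT NULL REFERENCES Customers(CustomerID),
    OrderDate  datetime NOT NULL
);
[/code]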
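For point 2, this is what the lookup-table pattern looks like, again with hypothetical names, assuming the Orders table above once carried the status as a varchar column:

[code]
-- A tinyint key (1 byte) replaces a repeated varchar value (~20 bytes per row),
-- so more Orders rows fit into each 8K page and fewer pages are read from disk.
CREATE TABLE OrderStatus (
    StatusID   tinyint PRIMARY KEY,
    StatusName varchar(20) NOT NULL
);

INSERT INTO OrderStatus (StatusID, StatusName) VALUES (1, 'New');

-- Existing rows get the default value, and the FK keeps the data honest.
ALTER TABLE Orders ADD
    StatusID tinyint NOT NULL
        CONSTRAINT DF_Orders_Status DEFAULT (1)
        CONSTRAINT FK_Orders_Status REFERENCES OrderStatus(StatusID);
[/code]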
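And for point 3, a sketch of a covering index, assuming a hypothetical query you run constantly:

[code]
-- Frequent query: SELECT OrderDate, StatusID FROM Orders WHERE CustomerID = @id
-- The key column satisfies the WHERE clause; INCLUDE carries the selected
-- columns, so SQL Server can answer the query from the index alone without
-- ever touching the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON Orders (CustomerID)
    INCLUDE (OrderDate, StatusID);
[/code]

Check your actual execution plans to decide which columns to key and which to INCLUDE; the names above are only placeholders.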
-
George
"The great things about standards is that there are so many to choose from." - Fortune Cookie Wisdom