cornercuttin
Programmer
I am using a database hosted on SQL Server 2000 (I think), on a box from around the same year or a year later, so the hardware and software are about 6 or 7 years old.
One of the tables in the database is about 6.5 GB in size, and it is extremely slow to query. The box holds 9 databases, and all of them are being hit at random times via network inserts, stored procedures, and network requests.
Everything is running extremely slowly. The table I am interested in holds about 9 million records, and it only gets cleared out once a year, when it is moved to a backup.
It seems that a simple COUNT(*) statement takes around 5 to 8 minutes to execute. To me, that is way too long.
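For reference, the query is just a straight count over the whole table, something like this (table name changed here, the real one isn't important):

    SELECT COUNT(*) FROM dbo.EventLog;  -- ~9 million rows, takes 5 to 8 minutes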
Are there any tricks for speeding things up? I'm not an MS SQL guy (my experience is with PostgreSQL and MySQL), so I don't know whether there is any kind of indexing I can add or caching I can turn on.
Does anyone have any tips?