I've got a table that acts as a log for a web site. For every impression the site gets, the system adds a row to the db, which I can query later to count impressions and unique IPs. The system used to work fine, but we're starting to see serious problems as the site has grown.
This table now holds over 40 million records. Three or four times a day, the index pages fill up and the server goes through a re-index. That process takes over the system and leaves the db server unavailable for 5-10 minutes at a time, which is unacceptable (especially when it happens at 3 in the morning; the site is set to page me when we get errors. UGH!)
I'm working on a better method for tracking this stuff, but until then, there are a couple of things I could do to alleviate the situation.
First, we don't need all the records in the table. I could delete 75% of them (the old records), but the server freaks out when I try to delete more than about 1,000 rows at once, and deleting 30 million at that rate would take forever. Does anyone know a way to bypass the automatic re-indexing while it's deleting, or just a fast and dirty way to delete 30 million records?
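For reference, here's the shape of the batched delete I've been attempting, just written to scale past the ~1,000-row batches the server currently tolerates. This is only a sketch: I'm assuming SQL Server-style T-SQL for illustration, and ImpressionLog, LoggedAt, and the three-month cutoff are all placeholder names and values, not the real schema.

    -- Sketch only, assuming T-SQL; ImpressionLog / LoggedAt / the cutoff are placeholders.
    -- Deleting in small committed batches keeps each transaction short, so the
    -- log and the indexes never have to absorb one giant 30-million-row delete.
    DECLARE @batch INT
    SET @batch = 10000

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batch) FROM ImpressionLog
        WHERE LoggedAt < DATEADD(month, -3, GETDATE())  -- stand-in for "the oldest 75%"

        IF @@ROWCOUNT = 0 BREAK  -- stop once no old rows remain
    END

Even batched like that it's slow going, which is why I'm hoping there's a way to suspend the re-indexing while the delete runs.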
Also, is there a way to massage the indexes so they don't have to refresh themselves so often? Even if we could get the server to refresh the index only once a day, it would be a huge improvement, especially if we could enforce a specific time to do so.
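In case it helps frame the question, this is the kind of once-a-night maintenance I'm picturing, again sketched in assumed T-SQL with a placeholder table name. The fill factor is the part I'm really asking about: as I understand it, rebuilding with free space left in each index page should make the "index pages fill up" event much rarer during the day.

    -- Sketch only, assuming T-SQL; ImpressionLog is a placeholder table name.
    -- FILLFACTOR = 80 leaves 20% free space in each index page after the
    -- rebuild, so page splits happen less often between nightly runs.
    ALTER INDEX ALL ON ImpressionLog
    REBUILD WITH (FILLFACTOR = 80)

The idea would be to run that from a scheduled job during the overnight quiet window, instead of letting the server decide to re-index at 3 a.m. on its own.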
Any ideas or steps in the right direction would be most appreciated.
Thanks,
DarkMan