I have a server running Windows 2000 Server SP4 on an Intel S7501HG0 motherboard, with two 2.8GHz Xeons, 2GB RAM, and two Adaptec 2200S RAID controllers (64MB cache each), set up as follows:
Controller 1: 2 external enclosures, each with 8 Seagate 68GB U320 drives, and each enclosure on its own channel. These 16 drives make up one RAID5 array, configured as a single logical drive that holds only the data files for an SQL database. There are hundreds of thousands of files on this array, most under 1MB.
Controller 2: six 34GB Seagate U320 drives set up as a RAID10, plus one standalone 34GB Seagate U320 drive. The RAID10 is broken into 5 logical drives, which hold the OS, the SQL database, and the logs.
All of the hard drives have the same version of firmware.
The problem I'm having is only with the RAID5 array. Every month or two it marks a hard drive as bad. We replace the drive with a new one, the array rebuilds, and we're fine again. Are these drives really failing, or is the I/O load so high across that many drives, with so little controller cache, that a drive can't keep up and gets marked as failed? We have been over this with Adaptec and get conflicting answers: some say the number of drives in a RAID5 ideally should not exceed 8, while others say that since these 16 drives are split across 2 channels (8 per channel), we should be OK. I just wanted to get some other opinions.
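For reference, here is the rough back-of-envelope math behind that worry, written up as a small Python sketch. The workload numbers are made up; the real figures would come from PerfMon's PhysicalDisk counters (Disk Reads/sec and Disk Writes/sec) on this array, and the per-drive IOPS ceiling is just an assumed ballpark for a U320 drive:

    # Rough check: does a given SQL workload saturate a 16-drive RAID5 set?
    # Workload numbers below are hypothetical -- substitute real PerfMon figures.

    DRIVES = 16                # drives in the RAID5 array
    DRIVE_IOPS_LIMIT = 150     # assumed rough ceiling for one U320 drive

    host_reads_per_sec = 800   # hypothetical read rate from PerfMon
    host_writes_per_sec = 400  # hypothetical write rate from PerfMon

    # A small random write on RAID5 costs roughly 4 back-end I/Os:
    # read old data, read old parity, write new data, write new parity.
    backend_iops = host_reads_per_sec + 4 * host_writes_per_sec

    per_drive = backend_iops / DRIVES
    print(f"Back-end IOPS: {backend_iops}, per drive: {per_drive:.0f}")
    print("Likely saturated" if per_drive > DRIVE_IOPS_LIMIT else "Within spec")

If the per-drive number comes out well under what a single drive can sustain, that would point away from simple overload as the cause of the drive drops.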
Thanks for any input!