Has anyone seen this before?
HP server with six disks in a RAID 5 config. A disk failed and was replaced, but the automatic rebuild fails at 86%, as does a rebuild run via the controller utility. A second replacement disk was tried; it also fails at 86%. We have since tried a third disk, two different RAID cards, and even a different server, and every time the rebuild fails at 86%. In desperation, a colleague took disks 1 and 2, placed them in the spare server, and performed a rebuild on just that pair. He then repeated the process with disks 3 and 4, then 5 and 6. Next he put all six drives back in the original server and forced the failed drive from the logical array back online. The array was then reported as optimal rather than degraded, so a consistency check could be run, and, knock me down with a feather, it completed OK. The OS has a few issues but is now running. So the question is: why did this work?
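For context, my rough mental model of how RAID 5 reconstruction is supposed to work (a toy sketch in Python, not the controller's actual firmware logic): parity is just the XOR of the data blocks in a stripe, so any one missing member can be recomputed from the survivors.

```python
# Toy model of RAID 5 parity reconstruction. Block sizes and disk
# count are made up for illustration; real controllers work on
# stripes of sectors, not 4-byte blocks.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across a 6-disk array: 5 data blocks + 1 parity block.
data = [bytes([i] * 4) for i in range(1, 6)]
parity = xor_blocks(data)

# Simulate losing disk 3 of the data set: rebuild its block by
# XORing the other four data blocks with the parity block.
surviving = data[:2] + data[3:] + [parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data[2]  # reconstruction recovers the lost block
```

If that model is right, then a rebuild needs every surviving member to be readable at every offset, so my guess is that an unreadable sector at the same LBA on one of the surviving disks would abort the rebuild at the same percentage every time. But I'd like to understand why the pairwise shuffle fixed it.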
Yours confused