Tozankyaku
MIS
Oh Ye IT Gods With Greater Knowledge Than I.... Help.
Let me set the stage for this mind-bender (at least it is to me). Here are the players:
- IBM Netfinity 5100 server
- ServeRaid-4LX controller (w/ 5.10 BIOS)
- 3 1.6" 36.4GB Quantum (Maxtor) Atlas IV Ultra160 SCSI Drives
Desired setup: 3 drives in a RAID 5 configuration with 3 logical drives (10/30/30GB)
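For what it's worth, the capacity math on that layout checks out. A quick sketch (assuming the 36.4GB per-drive figure from the parts list, treated as decimal GB):

```python
# Capacity check for the desired 3-drive RAID 5 array (my own arithmetic,
# using the drive size from the parts list above).
N_DRIVES = 3
DRIVE_GB = 36.4

# RAID 5 gives up one drive's worth of capacity to parity.
usable_gb = (N_DRIVES - 1) * DRIVE_GB
print(f"Usable RAID 5 capacity: {usable_gb:.1f} GB")

# The three logical drives to be carved out of the array.
logical_gb = [10, 30, 30]
print(f"Allocated: {sum(logical_gb)} GB, headroom: {usable_gb - sum(logical_gb):.1f} GB")
```

So the 10/30/30GB split leaves a couple of GB of headroom in a ~72.8GB array, meaning the logical drive sizes themselves shouldn't be the problem.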
Here's the problem... I can load any one of the drives in any one of the available slots (they equate to SCSI ID 2, 4, or 9) and ServeRaid Manager will initialize the disk as a RAID 0 drive without incident. So far, so good, right? However, put any other drive in there (either one and try to go with RAID 1 or the other two and try RAID 5) and the drives after the first drive will fail, regardless of logical drive configuration.
So, for example, if I load a single drive in the second slot (SCSI ID 4) it will work as a RAID 0. Add another to the last slot (SCSI 9) and that new drive will fail. Swap the two and again the drive in SCSI 4 (previously the one that failed in 9) will initialize and SCSI 9 will fail. Add a third drive to the mix and the drive in the first slot (SCSI 2) will initialize and the other two will fail.
You get the idea; it's not a pretty picture... Given that I'm such a RAID neophyte, I'm willing to bet it's something stupid. But what is it? What am I missing? This is driving me nuts!
I know all the drives are good, since they all initialize fine individually as a RAID 0 drive. Quantum (Maxtor) says there isn't a compatibility issue, and IBM's technical support is scratching their collective melons as much as I am. IBM has already sent another SCSI cable, 4LX controller, and hotswap backplane. Nice new parts, but no help whatsoever. Any thoughts would be appreciated.
Regards,
Scott