Hi All,
I've done many clusters, but this is the first time I have run into this one.
Hardware:
2x Netfinity 5100 (8658-1RY)
2x CPU
1128MB RAM
2x 9GB HS HDD (OS)
1x ServeRAID 4M PCI adapter
IBM Server NIC
3Com 3C980 PCI NIC
1x IBM EXP300 + 11x 36GB HS HDD + 2x 18GB HS HDD
All components have been flashed to the latest BIOS and firmware. RAID adapters are at the 6.11 package level.
Software:
- Windows 2000 Advanced Server SP4
- All hotfixes
- Drivers at latest via UpdateXpress + manual update from the ServeRAID Support 6.11 CD
- Cluster services 6.10 (from CD)
- ServeRAID Manager 6.11 (from CD)
Arrays:
MG 1: 7x 36GB Single RAID 5 Logical, Shared
MG 2: 2x 18GB Single RAID 1 Logical, Shared, Quorum
MG 3: 4x 36GB Single RAID 5 Logical, Shared
The problem is as follows:
When the (hardware) logical drive is configured with more than one drive letter (either as multiple primary partitions or as a single extended partition + logical drives), the IBM Cluster config wizard will not create a logical disk device for it, making it impossible to assign it as a resource. When the SAME (hardware) logical drive is configured with a single drive letter, it works perfectly. Since some of these drives already have data on them, and the requirements involve multiple partitions assigned to the same virtual server, this is causing major problems.
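For what it's worth, one workaround I am considering is bypassing the wizard and creating the disk resource by hand with cluster.exe. This is only a sketch: the resource name, group name, and the resource type string "IBM ServeRAID Logical Disk" are assumptions based on my setup, and I don't yet know whether the manual route handles multi-letter drives any better than the wizard does.

```shell
:: Sketch only - resource and group names below are placeholders for my config,
:: and the resource type string is my best guess at the ServeRAID cluster type.
cluster res "ServeRAID Disk E:" /create /group:"Disk Group 1" /type:"IBM ServeRAID Logical Disk"

:: Try to bring the new resource online.
cluster res "ServeRAID Disk E:" /online

:: List all resources and their states to confirm.
cluster res
```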
The cluster itself works properly: both nodes were configured and functioned perfectly using the 2x 18GB RAID-1 drive as the quorum. MSCS was installed by the book, using the IBM Cluster support software as the launching media. In addition, cluster validation with ServeRAID Manager shows that everything should be fine, and drive seizures with the HTO utility work properly. Since the servers were existing production machines, I have rolled one back to its original configuration for now and have the cluster running as a lone wolf (single node).
Any assistance would be appreciated.