W2K SP2 cluster service on two identical Dell 2450s connected to a Dell 650F disk subsystem through Brocade 2800 Fibre Channel switches.
I have the cluster service installed on both nodes. Core cluster resources are in the default Cluster Group:
Cluster IP Address
Cluster Name
Disk Q:
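(For reference, the group can also be checked and moved from the command line with cluster.exe. This is a rough sketch, not exact syntax from my setup; the group name is the default, and NODE02 is a placeholder for the second node's name:)

    rem show the state and owner of the core group
    cluster group "Cluster Group" /status
    rem move the group to the other node
    cluster group "Cluster Group" /move:NODE02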
I can access the Q: drive on the active node (01) via a command line or My Computer. I can execute the Move Group command to move these resources to the other node (02). CluAdmin tells me that 02 has become the active node, but I cannot access the Q: drive on 02. If I shut down the passive node (01), the cluster service on the active node (02) stops with event IDs 1016 and 1038, both related to not being able to access the quorum log. The Q articles have not helped. In that state, I have tried all the various startup tricks (fixquorum, resetquorumlog, noquorumlogging), but no luck. When I restart the passive node (01), the cluster service starts, sees that the other node is not doing its job, and 01 grabs control of the resources and the cluster comes back up. 02 then re-joins the cluster as the passive node the next time I start its cluster service.
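(In case it matters, here is roughly how I was applying those startup switches, one at a time, from a command prompt on the surviving node. A sketch along the lines of the usual KB instructions; ClusSvc is the W2K cluster service name:)

    net stop clussvc
    rem use exactly one of the following per attempt:
    net start clussvc /fixquorum
    net start clussvc /resetquorumlog
    net start clussvc /noquorumlogging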
So, something seems to be keeping one of the nodes (02) from being able to take full control of the external disk.
Has anyone run into something like this before?
Rich