Clustering with xSeries 336 - lots of issues


Jemimus (Technical User)
Aug 1, 2005
(I posted this in the IBM Servers forum before I realized this would be a better forum for it)

Hi there,

We are totally new to IBM hardware, and for the last few weeks we have been trying to navigate the forest of IBM software and documentation. We have gotten quite lost a few times, but we are making slow progress.

There are, however, a number of issues that we are still struggling with.

We started out downloading the newest versions of the various IBM CDs, thinking it would be beneficial to flash everything to the newest version and get our hands on the latest versions of the various software: ServerGuide 7.3.02, UpdateXpress 4.01, Director 4.22, UpdateXpress Server, ServeRAID v8.00.

Well, it turns out the ServeRAID CDs are just plain WEIRD... the version numbers have nothing at all to do with being more up to date; instead they are tied to the ServeRAID controller you actually have. However, the version of ServeRAID Manager IS newest on the 8.00 CD (8.00.19), even though the last Windows version of the same app can only be found on the ServeRAID 7.10b CD, and that is a lower version: 7.10.18.

The IBM site is, of course, NO help in trying to understand any of this.

Anyway, to build the cluster environment, we followed a document that seemed reasonably up to date: the ServeRAID User's Reference, which covers all ServeRAID controllers up to the 6 series. This document is called SRAID.PDF (as are many others, by the way), but this particular one can be found on the ServeRAID 7.10b CD.

So we have been happily following this guide, specifically chapter 3 - Installing the IBM ServeRAID cluster solution - and so far we have managed to get our controllers configured quite nicely.

We are using an EXP400 SCSI enclosure by the way.

We are currently tackling 3 issues.

The first is that the ServeRAID Manager application for Windows (version 7.10.18) doesn't seem to want to connect to the other instance of the app running on the other server in order to do a cluster configuration check. It seems to be failing on authentication. You can add other servers into the ServeRAID Manager view, but when we try to add server B to the console running on server A, it fails on credentials, accepting none of our domain or local admin accounts.

Another problem we have with the Windows ServeRAID Manager is that server B doesn't identify the shared drives in the enclosure as 'reserved', as it should. We have turned on the 'view shared drives' option, but this doesn't help. Server B sees the disks owned by A as 'ready', as if they were unconfigured and available for creating arrays on, even though they already contain arrays.

Finally, probably due to a weird Windows update on W2K that installed over the weekend (the server was stuck in the wrong OU, so it got automatic updates! damnit!) (we haven't installed SP1 yet), the ServeRAID Manager application now refuses to start correctly at all. It looks almost like a Java problem, and we have installed the latest Java runtime just to see if that would help, but to no avail. ServeRAID Manager just halts at the splash screen.

Now... we have spent the last few days getting more and more lost in the documentation and the like on the IBM site... it really is a mess... but one thing I ran into is what IBM calls the "IBM ServeRAID Microsoft Windows Server 2003 Clustering Solution version 1.00".

Currently I still have NO idea what this is, exactly... my best guess is that it is some kind of version of the ServerGuide CD... we are currently downloading the ISO at a staggering 5 KB/s, so we will see in the morning.
Does anyone here have experience with this Clustering Solution? What is it useful for? One thing that disturbed me a bit is that it suggested I would actually have to DOWNGRADE the firmware and BIOS of our ServeRAID-6M controllers. Surely that can't be right?

All suggestions would be welcome. Thanks a lot!
 
So, why not disable the twin-tail driver and use Microsoft clustering instead? I do not recall ever seeing IBM's clustering work.

 
I've been working with IBM servers clustered with Windows via the ServeRAID adapter, and I don't see any problems with it. It depends on what kind of clustering you want to implement.
If you want to do MS clustering, just follow the standard procedure from MS for clustering servers; ServeRAID Manager is only used to configure your quorum disks and data disks. For configuring the clustering itself you would hardly use ServeRAID Manager at all; it is mostly done on the Windows side.
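
If you go the plain MS route, cluster.exe (included with Windows Server 2003 Enterprise) gives you a quick sanity check from a command prompt once the cluster is formed. A minimal sketch, assuming you run it on one of the nodes:

    REM List the nodes, groups and resources with their current state.
    REM Everything should show as Up / Online on one of the two nodes.
    cluster node
    cluster group
    cluster resource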
 
Here is an update on our Cluster setup.

The first issue we had, with the ServeRAID Manager instances not seeing each other, has been solved. We simply hadn't defined any user accounts in the ServeRAID Manager; it was only when we were going over one of the ServeRAID controller manuals that we saw mention of this. It is not very clear that you have to define these accounts first, and they don't use Windows accounts at all. You can find all the options for the ServeRAID Manager under the TASKS button on the main menu.

The second issue we had was the servers not properly recognizing the other server's owned disks as reserved. We are not sure how we fixed this, actually. My colleague was messing around with the included IBM SCSI command line tools and somehow managed to get it working as it should. This will have to remain a mystery for now, as we don't have the opportunity to retrace our steps.

Anyway, when server A owns the disks in the external enclosure, server B is supposed to see them as "reserved", and they will appear dark blue in the ServeRAID Manager.

It doesn't see the other server's hot-spare disk as reserved, though; it sees the other server's hot-spares as 'ready', since hot-spares belong to controllers and are not owned by one system or the other the way normal array members are.

The final issue, of the ServeRAID Manager not starting, turned out to be a fluke. We think it is related to Windows updates, but we are not sure. We reinstalled the servers at one point and didn't encounter the issue again.

 
I'm also having problems setting up clustering with x336 servers and an EXP400 enclosure. I also have the problem of server B seeing disks as 'ready' when they are owned by server A. On top of that, when I try to install clustering, the feasibility check fails with the following error...

The physical disk F: is not cluster capable.

Disk F: is the array created in the EXP400.

Did you ever solve your problems? Do you have any hints?

Cheers
 
Sounds like you are using a RAID card that doesn't support clustering. The built-in RAID controllers don't support this functionality.
 
Have any of you been able to solve this issue? I am having the same problem!
 
Server A: internal drives attached to a separate RAID controller (best practice).
Server B: internal drives attached to a separate RAID controller (best practice).
Another channel, or a separate ServeRAID controller, is present on both servers, with no attached drives and reset to factory default settings. Do not attach the enclosure or SCSI cables to the servers until after Windows 2003 Enterprise Server is installed and configured. Ensure the latest firmware and drivers are installed, and that they are the same on both servers for both RAID controllers; you can get these from the ServeRAID CD. Once that requirement is complete, power off both servers and the array.

Each array (array A, array B, etc.) can contain only one MS partition for shared storage. For example, a configuration of 4 shared drives Q:\ R:\ S:\ T:\ is four drive letters, which requires a minimum of 4 separate RAID arrays (four RAID-1 arrays) in the external enclosure: array A = Q:\, array B = R:\, array C = S:\, array D = T:\. This applies only to the disk space that will be shared by the cluster.

a. Attach the SCSI cables.
b. Power on the array.
c. Power on server A.
d. Boot to the ServeRAID CD-ROM.
e. Create the RAID arrays on server 1. Right-click the 6M controller card, select clustering actions, then select configure for clustering. Configure:
   - Controller name = server 1's name; partner name = server 2's name.
   - Initiator IDs can be 6 or 7; make server 1 = 6 and server 2 = 7. **Note** Set this on both channels; it should not matter, but just to cover the bases.
   - Merge group information: select shared and give each server a unique merge group number; server 1 will be 1, server 2 will be 2.
   - Select OK.
   Right-click the controller card again, select clustering actions, then select view shared drives. Select the "View shared drives" radio button and add each drive one at a time, e.g. channel 1 drive 0, then channel 1 drive 1, and so on until all physical disks are added.
f. With the controller highlighted, select Tasks from the top menu bar, then the Security tab. Double-click the admin ID (the default admin ID is fine) and add a password. Note: these are case sensitive.
While shutting down and bringing servers online, the disks in the array may act erratically. Simply continue; after running ipshahto.exe where requested, the disks should be back to normal.
Shut down server 1 and repeat steps a through f on server 2, excluding the creation of the RAID arrays in step e.
Reboot server 2 and boot into the OS. Log on, load the ServeRAID CD in the CD-ROM drive, locate the Windows 2003 Cluster directory, and copy it to the server.
Drill down to the cluster directory Cluster\support and run ipshahto.exe; this is the IBM cluster hostile disk takeover program.
Using the Microsoft Disk Management utility, create the logical drives: Quorum and the others listed in the build doc. (See the sketch below for a command line equivalent.)
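
For reference, this is roughly what those two steps look like from a command prompt instead of the GUI. It is a sketch, not gospel: the C:\Cluster path is wherever you copied the CD's Cluster directory, the disk number is an example (check "list disk" output first), and diskpart on Windows 2003 cannot format, so format.com handles that part. Repeat per shared array (Q:, R:, S:, T:).

    @echo off
    REM Take ownership of the shared disks with IBM's takeover tool first.
    cd /d C:\Cluster\support
    ipshahto.exe

    REM Build a diskpart script: one primary partition per shared array.
    REM Disk 2 and letter Q: are examples - verify with "list disk".
    (
    echo select disk 2
    echo create partition primary
    echo assign letter=Q
    ) > %TEMP%\mkshared.txt
    diskpart /s %TEMP%\mkshared.txt

    REM diskpart in W2K3 has no format command, so format separately
    REM (answer Y when it asks for confirmation).
    format Q: /fs:ntfs /v:Quorum /q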
Reboot the server and verify the disks are OK.
Shut down server 2.
Boot server 1 and log on. Drill down to the cluster directory Cluster\support and run ipshahto.exe; this should take over the disks in the array. If the drive letters and descriptions do not carry over, apply the same drive letters that were created on server 2.
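
If a letter goes missing, a two-line diskpart script puts it back. The volume number below is hypothetical, so match it against "list volume" output on your own box first:

    @echo off
    REM Re-assign a lost drive letter (volume 3 is an example -
    REM run diskpart's "list volume" to find the right one).
    (
    echo select volume 3
    echo assign letter=Q
    ) > %TEMP%\fixletter.txt
    diskpart /s %TEMP%\fixletter.txt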
Reboot the server and verify the disks are OK.
Drill down to the Cluster directory and run setup; select the "Create new cluster" option. This calls the MS cluster setup program and also runs some IBM cluster operations after the cluster install.
Power on server 2, drill down to the Cluster directory, run setup, and select the "Join an existing cluster" option.
Launch the ServeRAID Manager program on one of the cluster nodes. Select the 6M controller, right-click, choose ServeRAID actions, and select "Validate cluster"; this runs a simple test between the servers' RAID cards that are in cluster mode.
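
"Validate cluster" only exercises the ServeRAID adapters. As an extra check that MS clustering itself is happy, you can do a manual failover test with cluster.exe from either node; the group and node names below are examples, so list the groups first to get the real ones:

    REM List the groups, then bounce one disk group to the other node and back.
    cluster group
    cluster group "IPSHA Disk Group R" /moveto:SERVER2
    cluster group "IPSHA Disk Group R" /moveto:SERVER1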
 
Forgot to mention this part:
There will be some errors during the MS cluster install stating that the shared drives you created are not cluster capable. This is OK; the IBM tools that run post-install will create the shared quorum and shared disks. You will be prompted to select a quorum drive during the IBM cluster install. The cluster resources that are created will each be in their own resource group, e.g. IPSHA Disk Group Q and so on. Move the quorum disk resource into the cluster group and delete the empty group. Depending on the use of the other disks, combine them or leave them separate (consult your platform engineer).
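
That tidy-up can also be scripted with cluster.exe if you prefer. The resource and group names here are examples; run the list commands first to see the exact names the IBM tools created:

    REM See what the IBM post-install step actually created.
    cluster resource
    cluster group

    REM Move the quorum disk resource into the main cluster group,
    REM then delete the now-empty IPSHA group.
    cluster resource "Disk Q:" /moveto:"Cluster Group"
    cluster group "IPSHA Disk Group Q" /delete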
 