
Boot from san on Clustered exchange 2003 server

Status: Not open for further replies.

jaksen112 (Programmer, Jan 25, 2002, US)
We have been migrating the majority of our production servers to boot from SAN. All has gone pretty well, but I am nervous about attempting our clustered Exchange servers.

I've read that MS supports such a scenario, but the logistics of it have me wary. MS states it is possible to use a single HBA for both boot and shared cluster disk access, but I can't understand how this could be zoned properly. My thought was to simply bring up a boot-from-SAN server and join it to the cluster. Has anyone ever run through something like this?

01110000
 
If you are clustered, why would you need to boot from the SAN?

Also, booting the OS and having the Exchange data down the same path is very common. Zoning is simple, as you are just allowing access from the HBA to the disk port.
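To make the zoning part concrete, here is a sketch of what a single-HBA zone might look like on a Brocade-style switch. All aliases, WWPNs, and config names below are hypothetical, not taken from this thread:

```
:: Hypothetical Brocade FOS zoning: one HBA zoned to one array front-end port
alicreate "node1_hba0", "10:00:00:00:c9:aa:bb:cc"
alicreate "array_spA_p0", "50:06:01:60:11:22:33:44"
zonecreate "z_node1_hba0_spA", "node1_hba0; array_spA_p0"
cfgadd "prod_cfg", "z_node1_hba0_spA"
cfgenable "prod_cfg"
```

With single-initiator zoning like this, the same HBA sees both the boot LUN and the data LUNs; which LUNs it sees is controlled by LUN masking on the array, not by the zone itself.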
 
We want to boot from SAN for disaster recovery purposes; once on SAN boot, we would set up an a/sync mirror of the boot LUN to the DR site.

01110000
 
I've done this in the past, with 4 HBAs per cluster member though - 2 for the boot path and 2 for the data - with boot and data zoned away from each other.
Member_1 has its own boot device on the SAN, and Member_2 has its own boot device. All data disk is assigned to all 4 data HBAs (2 on Member_1 and 2 on Member_2), and the cluster service sorts out the access.
I'm sure I've seen white papers from either EMC or MS on this exact solution, but I'm not sure where.
 
Following the MS article below, I've got the server using a single HBA for both boot and data LUNs. It seems to be working OK, but I haven't tried a reboot yet; I have only failed services over. Fingers crossed.



"In Windows Server 2003 Cluster server has a switch that when enabled, allows any disk on the system, regardless of the bus it is on, to be eligible as a cluster-managed disk. Using this, the system disk, boot disk, pagefile disks and any cluster managed disks can be attached to the same HBA. This feature is enabled by setting the following registry key:

HKLM\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters\ManageDisksOnSystemBuses = 0x01"
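For anyone scripting that change, the quoted key can be applied per node with a .reg file along these lines (a sketch based on the path quoted above; back up the registry before importing, and note the setting takes effect for the cluster service on that node):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters]
"ManageDisksOnSystemBuses"=dword:00000001
```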

01110000
 
We do a lot of clustering and exclusively boot from SAN (for several reasons - DR being #1). All works great, and we have all the redundancy we need with 2 HBAs per server.

We dedicate one HBA per server for boot (each server connected to a separate fabric).
Cluster1A_1 - Fabric A
Cluster1B_1 - Fabric B, etc.

We dedicate the other HBA for DATA (each server connected to a separate fabric).
Cluster1A_2 - Fabric A
Cluster1B_2 - Fabric B, etc.

We typically set up the first adapter to boot from SAN and install the OS. Once the OS is installed, we assign the second HBA all of the data drives, ensuring they are basic disks. (Our administrators love dynamic disks, and those tend not to play well in clustered environments.)
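The basic-vs-dynamic check mentioned above can be done with diskpart before presenting a disk to the cluster. The disk number below is illustrative:

```
:: Run diskpart on the node and inspect the candidate disk
diskpart
:: "list disk" shows a "Dyn" column; a mark there means the disk is dynamic
DISKPART> list disk
DISKPART> select disk 2
:: "detail disk" confirms the type and the volumes on it
DISKPART> detail disk
```

Any disk flagged dynamic would need its data moved off and the disk reverted to basic before it is suitable as a cluster-managed disk.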

This way, data and boot traffic are isolated, and we have redundancy in case we lose any HBA, SAN port, server hardware, etc. (The only thing we are not protected against is data corruption, but clustering is not designed to protect you against that.)
 
Question: have you tested redundancy/failover on the boot HBAs? For example, if you pull the plug on the active boot HBA, does the secondary pick right up without Windows skipping a beat? If so, I would love to know how you got it working, because I'm currently not utilizing failover on boot HBAs; I've never tested it to work.



01110000
 
And are you using MirrorView A/S? I've wondered what the implications might be of trying to mirror the logs/quorum drives across and then attempting to bring them up, possibly out of sync as far as transactions go, etc.

01110000
 
Yes, we have tested the failover of both the boot drive as well as data drives in this configuration.

The boot drive is not actually clustered, so only the shared data drives fail over to the other node and the application resumes normally.
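A failover rehearsal like the one described can be driven from the Windows Server 2003 command line with cluster.exe. The group and node names here are hypothetical:

```
:: Move a resource group to the other node to rehearse failover
cluster group "Exchange Group" /moveto:NODE2
:: List the groups again to verify the new owner
cluster group
```

Pulling the cable on the data HBA while watching the group owner gives a harsher version of the same test.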

We cluster File, Print, SQL, FTP & IBM MQ applications (20 Clustered servers) using this configuration and it has saved us on more than one occasion.

As far as MirrorView goes, we currently don't use that technology, so I'll let someone else speak to it. But EMC has stated to us that we wouldn't have problems using SRDF on a DMX-3 (not quite the same)...
 