Clustering seems a bit complex. I keep hearing horror stories about getting apps to run correctly in a cluster environment. Are there other options? What about virtual operating systems running on more than one physical box?
Getting apps to run in a clustered environment isn't that tough if you know what steps need to be taken, though it depends on the application.
Many applications aren't cluster-aware, and those are much more difficult.
The big VM systems (VMware, MS Virtual Server) don't let you run a single VM that spans physical servers. With VMware (and I'm sure MS Virtual Server) you can set up clusters of VMware hosts and get HA, where if one host machine dies, all the guest machines that were running on it are moved to another host and restarted. However, this is very complex to set up (we just finished setting up a six-server VMware installation here at the office).
What application are you thinking about clustering?
Not really sure which apps yet. I'm just investigating what's possible for non-cluster-aware solutions. I'm not even sure if it's only HA they are looking for or something more. A buddy said he saw a company called Marathon Technologies at a recent show, but he didn't have any more info than a link. Another friend mentioned one called SteelEye (not sure if that name is right), but again didn't have any real info.
Has anyone used any of these alternatives to clusters, and if so, how hard are they to implement and maintain?
Googling this subject is tough because there is so much out there and so many terms you could use.
"The big VM systems (VMware, MS Virtual Server) don't allow you to run a VM machine which crosses between physical servers"
?????
VMware ESX
VMotion - allows you to move VMs in real time (no downtime) between multiple physical servers.
VMware HA (High Availability) - automatically restarts a VM from a failed physical box on another host. This is not true HA like MS Clustering, as there is downtime (the time it takes to restart the VM, usually around 30 seconds).
You can also build a Microsoft cluster in VMware using either a VM/VM cluster or a VM/physical cluster.
What he's asking about is akin to a Beowulf cluster, where a "single" OS runs across more than one piece of physical hardware. VMotion, VMware HA, and MSCS don't fit that requirement.
I know there are some options like that in the Linux world, but probably not for the Windows world. I'm not even sure how something like that would react to a hardware failure. Based on the info you have provided, I'd recommend using a Microsoft cluster. Getting it set up shouldn't be that hard.
The basic requirement for clustering something is that it must run as a service. The setup follows this basic idea:
Set up a separate cluster group for the service. Give it a shared drive on the SAN, a network name, and an IP. Bring the group online on one of the machines. Install the software using the shared drive as the install location. When the install is done, move the group to the other machine and reinstall the software to the same path.
Now add the service to the resource group with the resource type "Generic Service".
This works best with software that keeps all its settings in a file in the install folder, but it can also be done with services that use registry keys; you'll see an option for how to replicate registry keys from one machine to the other.
I am going to post this here, as the subject seems to match my question even though the content so far is not exactly what I am looking for; I can find no other info on the site about this particular question. With that, here is the question.

I would like to set up some redundancy for one of our Web servers. I thought clustering would be the solution, but the shared-disk requirement is prohibitive, and I don't want the automatic failover. So here is what I am looking for: is there a way to mirror a server such that one server is on the public network and a "standby" server is kept as an EXACT copy of the production server through periodic updates over a private network connection via a second set of NICs? Then, if the production server fails, all you would need to do is pull the network cable from the production server's public NIC, plug it into the standby server's public NIC, and effectively have an exact copy of the production server up and running the instant the cable clicks into the NIC.
I hope this isn't confusing; I can clarify for anyone who wants it. Any help would be greatly appreciated. Of course, I am talking about a Windows 2003 Server environment, and the role of the server I will be replicating is Web services on IIS.
I would really appreciate any help. I swear I have seen it done before, but I can't for the life of me find the reference, and I fear I am crossing Unix knowledge with NT.
You set up two machines and use some method to replicate the data between them. You can use DFS, scheduled robocopy commands, etc.
You then set up load balancing between the machines so that both are active. If either machine goes down, the other goes from handling 50% of the requests to 100%. You can use either a hardware load balancer or the built-in Windows Network Load Balancing (NLB) service.
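To illustrate the replication half of this, here is a minimal sketch of the "exact mirror" idea (the same thing a scheduled `robocopy /MIR` job does). It's a hedged, cross-platform Python stand-in, not the actual DFS or robocopy setup, and the directory names are made up for the demo:

```python
import shutil
from pathlib import Path

def mirror_tree(src: Path, dst: Path) -> None:
    # Make dst an exact copy of src (the same idea as robocopy /MIR):
    # anything already at dst is removed, then src is copied over.
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

# Tiny demo: build a stand-in web root, then mirror it to a standby path.
# (Paths are hypothetical; a real job would run on a schedule.)
src = Path("webroot_demo")
src.mkdir(exist_ok=True)
(src / "index.html").write_text("<html>hello</html>")

dst = Path("webroot_standby_demo")
mirror_tree(src, dst)
print((dst / "index.html").read_text())
```

A scheduled task running something like this every few hours gives the standby box its periodic copy; the delete-then-copy approach guarantees the standby never keeps files the production box has removed.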
Actually, one of the requirements is that the standby machine not be on the public network. That way, if the production machine gets hacked, and the standby has a replica of the production box as of, say, 5 hours ago, we can pop the standby onto the network while we do forensics on the hacked box. Would it be possible to do load balancing on two nodes with both virtual IPs set the same, but with only one of them attached to the public network, so that if the production box got hacked you could unplug it and plug in your backup box, ensuring very little downtime while taking the production box offline for forensics? Would you set up a second NIC in each box for the NLB replication, since the standby box will not be on the network? Or does NLB even replicate anything? Is that where DFS comes in for replicating the data?
There is actually an alternative to clusters that addresses the issues and shortcomings mentioned throughout this thread. It treats two physical servers as a single server, so you only have one operating environment to manage and one application to install, license, and manage. All fault management is automatic, so there's no configuration for failover and failback. Failures are transparent and have no impact on the application: no failover, no restarts, no downtime.
"Is the web server being hacked a problem? If so better firewall and intrusion detection may be in order."
Not at all really. It is a requirement of the Web Design group that we are supporting that their systems have about a 5-hour delay in their content, so that if something like that happens (hacking, a bad Windows update, etc.) they have a box they can bring up that is about 5 hours behind. Personally I think it is stupid, as we have a firewall and we thoroughly test our updates and have never had a problem, but I don't make that decision and ultimately have to support the client. If it were me I would do NLB and be done with it.
Thanks for all the help though. I now have a couple of ideas to do some lab work on.
The whole idea of clustering is redundant hardware; I don't see how virtual machines on the same physical box accomplish this.
Clustering in the MSCS sense means redundant hardware (hosts) and shared storage. You could go further with a replicated-storage design; some applications even provide this capability at the application level (Oracle, SQL Server, Exchange 2007, Domino, etc.), which provides redundancy for the stored data as well. A hybrid solution would be a geographically dispersed cluster, which is a set of vendor-specific replication extensions integrated into MSCS to provide replicated instead of shared storage.
If you're going to go that far, why not do away with clustering altogether and just replicate? Mirror the boot LUNs as well as the data (your replication tool will need to be application-aware and within the MS support boundaries), then in the event of failure power up the DR hosts connected to the replication destination volumes.
If I am interpreting mkjj123's product correctly, you install their everRun product on two separate physical servers and it somehow serves up a third OS that's virtual. That third, virtual OS is where I would install my applications without any special considerations. If one physical server has a meltdown, the virtual OS isn't affected and my application just continues to chug along.
This seems like a very cool solution!
Has anyone else at least looked into this or better, have any experience with it?
Update: I now have Exchange 2003 running on a Marathon everRun FT virtual operating system and it works great! It's running on (hosted by?) two identical Dell PowerEdge servers. Installing the software was pretty easy, and no hiccups to report so far. If you haven't looked into this product yet, it's worth checking out!
Thanks again to all of you that took part in this post.
"It is a requirement of the Web Design group that we are supporting that their systems have about a 5-hour delay in their content, so that if something like that happens, hacking, bad Windows update, etc., they have a box to be able to bring up that is about 5 hours behind"
cdknill,
Given what you stated above, I wouldn't recommend real-time replication or even a short automated replication interval between servers. What if they got "hacked" on the production box and had content destroyed right before a replication interval, and then had the damage replicated to the standby server? And what if the compromised server was also backed up to tape as well? You would have no rollback in that scenario.
The only thing that comes to mind is a manual replication routine where a batch file is kicked off manually by an admin, say right before any content updates to the production server. That gives you a snapshot.
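The manual snapshot idea can be sketched like this. It's an illustrative Python stand-in for the batch file (the directory names and the timestamped-folder scheme are my own assumptions, not anything from a real product): each manually kicked-off run copies the content into a new dated folder, so every run is a rollback point.

```python
import shutil
import time
from pathlib import Path

def take_snapshot(src: Path, snapshot_root: Path) -> Path:
    # Copy src into a new timestamped folder under snapshot_root.
    # Kicked off manually before content updates, each snapshot is
    # a known-good rollback point that later hacks can't overwrite.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dst = snapshot_root / f"snapshot-{stamp}"
    shutil.copytree(src, dst)
    return dst

# Demo with a stand-in content directory (paths are hypothetical).
src = Path("content_demo")
src.mkdir(exist_ok=True)
(src / "page.html").write_text("v1")

snap = take_snapshot(src, Path("snapshots_demo"))
print(snap.name)
```

Because each run lands in a fresh folder instead of overwriting the previous copy, a compromise on the production box right before a run only taints the newest snapshot; the older ones are still usable for rollback.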
Does this company have a QA server where the content gets tested first? That could be your rollback content as well, depending on the environment. My last company had a staging area where content was tested before it was pushed out to the production servers.