ESXi, and Memory/CPU only servers


snootalope
Our company is just starting to scratch the surface of what VMs can do for us. We've run a few trials on ESXi 3 Update 2 and have a pretty good idea of what we'll need as far as storage and servers. However, I'm a bit confused about how exactly one part works.

We decided we'd need four "front-end" servers running ESXi 3 that would have nothing but dual CPUs and 16GB RAM each. The part that gets me is what happens when one of these front-end servers loads a VMDK file from our storage and something inadvertently takes down that particular front-end machine. Say the machine is running our Exchange server as well as a Terminal Server; how does the VM safely unload? Would the VMDK file suddenly become corrupt?

In my experience with the free VMs, one of my tests has been simply pulling the power while a guest is running. No surprise, the VM is rendered pretty much useless, as it didn't unload properly and thinks it's still running. (If there's a proper fix for this, I don't know it; the closest thing I've heard of is in the sketch below.)
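
What I've seen suggested is clearing the stale .lck locks the hosted products (Server/Workstation) leave next to the .vmx/.vmdk. A rough sketch of that cleanup; the VM directory here is made up, and you'd only run it after making sure no vmware-vmx process still owns the VM:

    import os
    import shutil

    # Hypothetical path to the crashed VM's directory; adjust for your setup.
    VM_DIR = "/var/lib/vmware/Virtual Machines/testvm"

    # VMware Server/Workstation keep .lck entries (often directories) beside
    # the .vmx/.vmdk while a guest runs. After a hard power pull they can
    # linger and make the product think the VM is still running.
    for entry in os.listdir(VM_DIR):
        if entry.endswith(".lck"):
            path = os.path.join(VM_DIR, entry)
            print("removing stale lock:", path)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)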

Anyway, I'm just worried about how these RAM-packed ESXi hosts are going to handle our server images in the event one of the servers crashes. What would the status of the actual server file be? Is it just stuck at that point in time, waiting to be loaded up again?

Thanks for any advice..
 
Well, the assumption is you'll run ESX in a cluster on some sort of SAN. Each physical machine will connect to the SAN, and each one will see all the LUNs. If a physical machine dies, your other ESX machines will pick up those virtual machines, and your users won't know anything happened other than a slight "network" glitch of a couple of seconds. You should be running a production environment on a paid version of ESX for several reasons. One, you get 24x7 support; that's saved my butt a couple of times. Two, you get HA and DRS (High Availability and Distributed Resource Scheduler), two things you do not get with ESXi. HA and DRS work hand in hand: they're the portion of VMware that moves your guest machines from one physical host to another, whether one is being overtaxed or a physical fault takes a host down.
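
For what it's worth, once you're on the paid version those are just cluster-level switches you flip on. Here's roughly what that looks like driven through VMware's Python SDK, pyVmomi (which is far newer than anything in this thread); the vCenter host, credentials, and cluster name are all placeholders:

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certs in production
    si = SmartConnect(host="vc.example.com", user="admin", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "prod-cluster")

        # HA ("das") restarts guests from a dead host elsewhere in the cluster;
        # DRS live-migrates guests to balance load. Both are cluster settings.
        spec = vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(enabled=True),
            drsConfig=vim.cluster.DrsConfigInfo(
                enabled=True, defaultVmBehavior="fullyAutomated"))
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    finally:
        Disconnect(si)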

Buying four machines with a SAN and the paid version is costly up front, but in a production setting you WILL need it. I wouldn't even consider using ESXi in production unless it is ONLY as a DR last resort.

For the record, forget 16 gigs of RAM; get 32. The cost of the upgrade is worth it and you'll need it. Get dual quad-core processors; your guest machines will be able to see each of the cores as a single processor. Either boot from your SAN or get three 36 gig drives as your "OS" drives in a RAID 5; the install for ESX is only about 800MB.
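
Quick math on that drive layout, since RAID 5 gives one drive's worth of capacity to parity:

    def raid5_usable_gb(drives: int, drive_gb: float) -> float:
        """RAID 5 usable capacity: one drive's worth is lost to parity."""
        return (drives - 1) * drive_gb

    print(raid5_usable_gb(3, 36))  # 72 GB usable vs. ~0.8 GB for the ESX install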

I have two Dell 2950s: dual quad-core processors, 32 gigs of RAM, and a Dell/EMC 6TB SAN. We are booting from our SAN, so our 2950s have no drives. I am running 31 VMs, and each physical machine is at about 40 to 50% utilization, which is good: if one fails, the other will pick up the load and no one is the wiser except my boss and I. We are almost out of drive space, so we are adding another enclosure to our SAN and one more server. We'll probably add another 10 VMs once we are done, with room to grow.

Went from 7 racks of servers down to about 2 and a half. Once we buy our other enclosure and server, I'll be down to one rack.

My suggestions are what work for my environment so yours might be slightly different but at least you have an idea of what can be done. I have heard of people running as many as 50 or more VMs per physical server.

Cheers
Rob

The answer is always "PEBKAC!"
 
Greatly appreciate the info. We're a smaller shop, only about two racks of servers, so I'm somewhat struggling to justify a $60k storage solution at the moment. I know what the benefits are, but that's a lot of coin for a small shop.

I just talked to our vendor and he said EMC just came out with a new NX4 or NS4 - haven't even looked for it yet but I think it's one of those, and it's aimed at smaller companies.

Do you do replication off your EMC box to another site/storage box?

We're looking at the Infrastructure 3 Enterprise version for six processors: three boxes, two processors apiece, quad-cores. We might make it the eight-CPU version; haven't decided yet.
 
VMware prices their software at 2 CPUs per license, so the cost of going from 2 processors to 4 is quite a jump.
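
If you want to pencil that out, the license count is just total sockets divided by two (cores don't matter for this). A throwaway calc; plug in whatever your reseller quotes per license:

    import math

    def licenses_needed(hosts: int, sockets_per_host: int) -> int:
        # One VMware license covers 2 physical CPUs (sockets).
        return math.ceil(hosts * sockets_per_host / 2)

    print(licenses_needed(3, 2))  # 3 licenses for three dual-socket boxes
    print(licenses_needed(3, 4))  # 6 licenses if those boxes go quad-socket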

We have a 6TB direct-attach storage unit from Dell with SAS drives (much, much cheaper; I think we paid like $3,000 or something for it), and we're using Vizioncore vRanger, which works pretty well for backup. vRanger snapshots the VMDK at the ESX level (your VMs don't know anything is going on), dumps the file across our network (weekly fulls, then differentials), and then we pick it up to tape for offsite.
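
The snapshot-then-copy idea itself isn't vRanger magic; you can drive the same sequence through the vSphere API yourself. A minimal sketch using VMware's pyVmomi SDK, purely to show the principle (this is not how vRanger is implemented, and the host, credentials, and VM name are made up):

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="esx.example.com", user="backup", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "testvm01")

        # Snapshot at the host level -- the guest keeps running, unaware.
        task = vm.CreateSnapshot_Task(name="backup", description="weekly full",
                                      memory=False, quiesce=True)
        WaitForTask(task)
        snapshot = task.info.result

        # ...copy the now-frozen base VMDKs across the network here...

        # Drop the snapshot so the delta merges back into the base disk.
        WaitForTask(snapshot.RemoveSnapshot_Task(removeChildren=False))
    finally:
        Disconnect(si)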

We spent about $90k on our setup, and with Dell we got them to throw in three training classes (my boss and I did the Install and Config class, and then my boss did the Security class). If you have a budget and can wait until November, Dell always has some really good deals then as they chase end-of-year sales. We're holding off until then to buy our second enclosure and another server.

Cheers
Rob

The answer is always "PEBKAC!"
 
I forgot to mention: if you can get, at the very least, the Install and Config training, do it!!! It is so worth it; lots of good information. Also, if you do it, do it a week or two before you set up your infrastructure. There are some tricks you'll need, and you'll find them out the hard way if you skip the class and only discover you got it wrong after everything is already set up.

My lesson learned the hard way (thankfully on a trial, non-production install): the default install only allows a max disk size of 256 gigs because the VMFS block size is 1MB; make it 8MB and you can have 2TB disks.
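
The arithmetic behind that limit is simple: VMFS3's max file size scales linearly with block size, 256GB of file per 1MB of block:

    # VMFS3 max file size scales with block size: 256 GB per 1 MB block.
    for block_mb in (1, 2, 4, 8):
        max_gb = 256 * block_mb
        print("%d MB block -> max disk %d GB (%g TB)" % (block_mb, max_gb, max_gb / 1024))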

Cheers
Rob

The answer is always "PEBKAC!"
 
So you have direct-attached storage then... so your storage server plugs right into a gigabit switch along with your ESX servers? Are all your network shares, like your users' home directories, roaming profiles, shares, etc., sitting on the same direct storage as well?

The performance is just fine with your setup?

How many SAS drives do you have in your DAS? RAID 5+10?

Do you use the VMotion between your two ESX servers?

So on a normal day, how many VM's do you have running on each ESX?

Also, what did you do with your mail server; did you VM it as well? We have Exchange 2003 and have been a bit nervous about VM'ing the thing...
 
We have only our backups on the DAS; it has fifteen 500 gig drives in RAID 5 plus one hot spare. It is direct-attached via SAS to a Dell PE850 on the same gigabit switch as our ESX servers. So far we aren't seeing anything that would make us move things around. Our performance is pretty decent.

I am using VMotion, HA and DRS. Just set that portion of it up a couple of weeks ago. I've done a pretty good job of balancing out our VMs, so we really haven't had a need for DRS yet. VMotion rocks if you need to reboot your physical servers, which I had to do last week to configure the BIOS to allow 64-bit VMs. That was also a lesson learned in the VMware training class: Intel processors need a setting flipped in the BIOS to allow 64-bit VMs. It's called Intel Virtualization Technology (VT), and it needs to be turned on.
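
If you'd rather check before scheduling the reboot, the CPU flag is visible from any Linux shell (including the ESX service console). A rough first-pass check; note the flag only proves the CPU supports VT, so a BIOS that has it disabled can still show it:

    # 'vmx' = Intel VT, 'svm' = AMD-V. Presence means the silicon supports it;
    # whether the BIOS has it enabled still needs confirming at boot.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    if flags & {"vmx", "svm"}:
        print("CPU supports hardware virtualization; confirm it's enabled in the BIOS")
    else:
        print("no vmx/svm flag -- no 64-bit guests on this box")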

I have 15 VMs running in full production on one ESX server and 16 on the other. I have many different operating systems: a few SQL 2008 DB servers, three Linux servers for our mail relay, a couple of XP machines, a couple of Windows 2000 boxes, and a dozen 2003 machines including web servers.

We outsource our email through USA.net. Before I got hired on a few years ago, the old IT guy was an idiot and basically useless, so the decision was made to outsource the email, and I am in no hurry to bring it back in house. With 35 users (half using BlackBerry) and two of us on the IT staff, we aren't pushing the issue.

I will say I know many VMware users who are running Exchange on a VM and have no problems with it. I know people running DCs, SQL, Sybase, Oracle, you name it... after all that I have seen, I wouldn't be afraid to run anything in a virtual environment. I am sure there is something out there, but I can't see where anything won't run (and sometimes run better) in a virtual world.

For instance, take a web server that connects to a DB server. On two physical machines you're connecting across a LAN, usually gigabit but it could be 10/100. In a virtual world you can run those two on the same physical machine and connect across the bus (for lack of a more detailed explanation), so it is a much faster connection.
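
If you ever want to put numbers on that, a crude way is timing TCP round trips from a client VM to each DB. The hostnames below are placeholders; point one at a DB across the physical LAN and one at a DB on the same ESX host:

    import socket
    import time

    def avg_connect_ms(host: str, port: int, samples: int = 50) -> float:
        """Average TCP connect time as a rough latency probe."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass
            total += time.perf_counter() - start
        return total / samples * 1000

    print("across the LAN:", avg_connect_ms("db-physical.example.com", 1433))
    print("same ESX host :", avg_connect_ms("db-vm.example.com", 1433))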

You should look into joining a VMUG in your area; you can talk with actual VMware users about their experiences. Our local group is pretty awesome, with LOTS of useful information. We usually get a vendor to sponsor the meeting; they come in and tell us about their products (and bring the freebies and/or prizes), then we get together and have kind of a round-table discussion of problems or questions we've run into.

Cheers
Rob

The answer is always "PEBKAC!"
 
So, since you're using DAS, your ESX servers don't require HBAs then, right, since it's just a network share? Or is that not right?

Also, what are you using for a DB for the VirtualCenter server? Being that we're such a small shop, I was kind of thinking we could get away with SQL 2005 Express, but since I haven't worked with VC yet, I really don't know what to expect as far as disk/DB consumption.

Do you have different subnets or VLANs for your VMkernel, console, and storage? Or do you combine them all on one subnet/VLAN?
 
Sorry I am just getting back around to this, was out of the office last week.

We do have HBAs on our servers to connect them to the SAN. Each server has a dual-port QLogic 4Gb HBA card.

As far as a DB for our VC server, we were already running SQL for some of our number crunching so it wasn't a big deal, but you could get away with SQL 2005 Express as the DB and it would work just fine. In fact you could, although it isn't recommended, make your VC server a virtual machine. Our VC server is a Dell PE850: one processor, 2 gigs of RAM, and two 73 gig drives mirrored. It is also our license server.

We did VLAN off our 48-port switch. We have our internal network, our DMZ, our VM network for VMotion, and we created an out-of-band network for the remote access cards on the physical servers. When our budget allows, we'll actually tighten that up with fewer VLANs and more physical switches, giving us some room to grow and a little more security.

The DAS is part of our internal network. Each server has dual on-board NICs and an Intel PCIe quad NIC, plus a RAC for remote "physical" access: two ports for our internal network, two for the DMZ, two for VM traffic, and the RAC on the OOB network.
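
On ESX 3 the VLAN carving is done from the service console with esxcfg-vswitch. A sketch of the idea, wrapped in Python so it's repeatable; the switch name, port group, VLAN ID, and NIC are example values, not our actual layout:

    import subprocess

    def run(*args: str) -> None:
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    # One vSwitch with an uplink NIC and a VLAN-tagged port group for VMotion.
    run("esxcfg-vswitch", "-a", "vSwitch1")                  # create the vSwitch
    run("esxcfg-vswitch", "-L", "vmnic2", "vSwitch1")        # attach an uplink
    run("esxcfg-vswitch", "-A", "VMotion", "vSwitch1")       # add a port group
    run("esxcfg-vswitch", "-v", "20", "-p", "VMotion", "vSwitch1")  # tag VLAN 20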

Hope this helps.

Cheers
Rob

The answer is always "PEBKAC!"
 