
Help with allocating drives for VM servers

Status
Not open for further replies.

apc1234 (Technical User), Jun 19, 2009
Hello all,

I have 2 physical HP DL385 G7 servers with the following specifications:

2 x 6-core Opterons
8 x 146GB drives
48GB RAM

The purpose of the servers is:

Server 1 is going to hold ~5-8 Domain Controllers
Server 2 is going to hold ~5-8 Web Servers

On both the servers the drives have been set up as:

Disks 1 and 2 (136GB) --> RAID 1+0 (holds ESX install)
Disks 3 - 5 (273GB) --> RAID 5 (will be for DC installs)
Disks 6 - 8 (273GB) --> RAID 5 (will be the data drive)
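As a quick sanity check on those array sizes, the usable capacity works out like this (a rough sketch; I'm assuming the nominal 146GB drives format to about 136GB each, which matches the figures above):

```python
# Rough usable-capacity math for the layout above.
# Assumption: a nominal 146 GB drive formats to ~136 GB.
DISK_GB = 136

def raid_usable(n_disks, level, disk_gb=DISK_GB):
    """Approximate usable capacity, ignoring filesystem overhead."""
    if level == "1+0":          # mirrored stripes: half the raw space
        return n_disks // 2 * disk_gb
    if level == "5":            # one disk's worth of parity
        return (n_disks - 1) * disk_gb
    if level == "6":            # two disks' worth of parity
        return (n_disks - 2) * disk_gb
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable(2, "1+0"))   # ESX install array: 136
print(raid_usable(3, "5"))     # DC array:  272 (~273 GB)
print(raid_usable(3, "5"))     # data array: 272 (~273 GB)
```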

What I was thinking of doing is creating all 5 servers on 5 separate partitions carved from the RAID arrays.

So, my question is: should I leave the configuration as is and continue setting up the servers, or is there a better way to allocate resources for the servers?

Thank you.
 
I set mine up in RAID 6, and they are of similar hardware configuration to yours. If you let VMware partition the array, it will use about 7-8 GB for the VMware OS and leave you just over 800 GB for the VMFS partition where you can store guests. You will end up with more guest-usable disk space in this configuration and still have solid redundancy. Plus, it gives you more contiguous disk space than two RAID 5 arrays. Lots of ways to skin this cat; I am sure you will get more opinions.
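To put rough numbers on the RAID 6 suggestion (hypothetical figures, assuming the same ~136 GB formatted capacity per nominal 146 GB drive as in the original layout):

```python
# Compare guest-usable space in the two layouts (rough sketch;
# assumes ~136 GB formatted capacity per nominal 146 GB drive).
DISK_GB = 136

def raid_usable(n_disks, level, disk_gb=DISK_GB):
    """Approximate usable capacity, ignoring filesystem overhead."""
    if level == "5":
        return (n_disks - 1) * disk_gb   # one parity disk
    if level == "6":
        return (n_disks - 2) * disk_gb   # two parity disks
    raise ValueError(f"unknown RAID level: {level}")

# Original plan: two 3-disk RAID 5 arrays for guests
two_raid5 = 2 * raid_usable(3, "5")      # 544 GB

# Suggested: one 8-disk RAID 6 array, minus ~8 GB for the ESX OS
raid6 = raid_usable(8, "6") - 8          # 808 GB, "just over 800"

print(two_raid5, raid6)
```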

RoadKi11

"This apparent fear reaction is typical, rather than try to solve technical problems technically, policy solutions are often chosen." - Fred Cohen
 
In the next rev of ESX, there will only be ESXi. The installable ESXi only takes up about 700MB of disk space. I like to use a USB flash disk to hold the ESXi OS, leaving 100% of the storage space for VMs. On newer servers you can typically find a USB port inside the chassis for this, so you don't have a USB key sticking out the back (or front) of the server.

5 servers on 5 partitions throws me off a bit on what you're doing.

Keep in mind, you can only create one VMFS partition per logical disk presented to the host. Also, for some time there was a rumor floating around that you get better performance if you create one logical disk per virtual machine, and that this was best practice. Both rumors are actually true, BUT the performance increase gained by dedicating a logical disk to a virtual machine is minimal. It is a very small increase. VMware never documented this setup as best practice; it was Microsoft that documented it as best practice, and only because Hyper-V was limited to that kind of setup. Microsoft has since changed their best practice to multiple virtual machines per logical disk, because Hyper-V is now capable of running multiple virtual machines on a single logical disk.

Do look into shared storage so you can eventually take advantage of vMotion. There are lots of inexpensive SAN solutions out there (Coraid being one of them) that will give you faster disk throughput and let you leverage disaster recovery features. Yes, it means you would have to buy the expensive vCenter license. Look at the Essentials Plus license package: it's at a pretty competitive cost, and with 4.1, VMware gives you vMotion now. They also run some pretty good deals from time to time; the last one I saw go through was a free 10-seat license of View with an Essentials Plus purchase. So be sure to ask your VAR what the current offerings are.

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Brent Schmidt Senior Network Engineer
Keep IT Simple | Novell Platinum Partner | Microsoft Gold Partner
VMWare Enterprise Partner Citrix Gold Partner
 
That's pretty bizarre, having 5 DCs on one virtual server. Apart from having 2 DCs for failover (which is pointless virtualized unless there are multiple hosts), 1 DC can support hundreds of thousands of hosts if the VM has enough memory to cache the entire AD database.
 
Hello all,

Thanks for the responses. Based on them and on suggestions from some other folks, I have set up the disks (RAID 5) as one large datastore on each host, and am now installing the DCs and web servers onto these datastores.

Provogeek - I did try to sell the concept of using our SAN for shared storage to take advantage of the nifty features, but got turned down in favor of using the internal drives on each of these machines.

theravager - I have 5 DCs on one virtual server because each DC is for a separate client and each will be on a different network (I have NICs assigned to each guest). I also have another host with a second set of 5 DCs at our co-lo. Basically, I have a mirrored setup using two hosts (trying to eliminate a single point of failure).
 
