
VMware SAN performance

Status
Not open for further replies.

joepc (MIS), Jul 26, 2002
I have a blade server with a few blades that all share one built-in SAN. I have diced it up into several LUNs and assigned a LUN to each blade. One of the VMs is going to be a Terminal Server that all the users will use.

Since all the VMs are going to sit on the same SAN, I would like them to communicate via the SAN backplane rather than the 1 Gbps network connection. If I can accomplish this, the Terminal Server should be able to communicate with all the other servers faster.

Is this possible to do? Since I'm in the virtual world, the servers probably don't know they all sit on the same SAN (assuming), so they are probably going to communicate via the slower network connection.

Can anyone give me advice on the best way to set this up from a performance perspective? The SAN is only a 5-disk RAID 5 array carved up into several LUNs. I have not fully set up the blade, so I can reconfigure the entire thing if I need to.

We are using VMware vSphere 4.1 Essentials Plus.
 
You can't run network traffic over a SAN Fibre Channel fabric. The SAN uses the Fibre Channel protocol to transfer storage data, not the Ethernet standard, and Fibre Channel won't carry TCP/IP traffic between your servers.

You'll have to send the network traffic over the Ethernet network.

Denny
MVP
MCSA (2003) / MCDBA (SQL 2000)
MCTS (SQL 2005 / SQL 2005 BI / SQL 2008 DBA / SQL 2008 DBD / SQL 2008 BI / MWSS 3.0: Configuration / MOSS 2007: Configuration)
MCITP (SQL 2005 DBA / SQL 2008 DBA / SQL 2005 DBD / SQL 2008 DBD / SQL 2005 BI / SQL 2008 BI)

My Blog
 
What about VMCI? I don't know much about it, but I thought it was there to allow VMs on the same host to communicate with each other at very high speeds, bypassing the network layer.



Biglebowskis Razor - with all things being equal if you still can't find the answer have a shave and go down the pub.
 
If the VMs are on the same vSwitch and the same host, then the network traffic will just go through the virtual switch, but it will still be going through the network part of the hypervisor. It won't go through the Fibre Channel at all.

Denny
 
Mrdenny, will that still bottleneck me at 1 Gbps?

Thanks guys!
 
In theory, no. I'm not sure if the VMware host would bottleneck the vSwitch and vNICs to 1 Gig. I've never tried to push that much data through a single VM vNIC before.
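Rather than guess at the ceiling, you can measure it with a quick in-guest TCP blast between two VMs. A minimal sketch of the pattern, with the receiver and sender collapsed onto localhost just to show the mechanics (in real use, run the receiver half in one VM and point the sender at that VM's IP; the port number is an arbitrary choice):

```python
import socket
import threading
import time

def measure_throughput(seconds=1.0, host="127.0.0.1", port=50007):
    """Blast TCP data at a receiver for `seconds` and return Gbit/s."""
    received = [0]
    ready = threading.Event()

    def receiver():
        # In a real test this half runs on the destination VM.
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        while True:
            data = conn.recv(65536)
            if not data:          # sender closed the connection
                break
            received[0] += len(data)
        conn.close()
        srv.close()

    t = threading.Thread(target=receiver)
    t.start()
    ready.wait()

    # Sender half: push 64 KiB chunks as fast as possible for the window.
    chunk = b"x" * 65536
    cli = socket.socket()
    cli.connect((host, port))
    deadline = time.time() + seconds
    while time.time() < deadline:
        cli.sendall(chunk)
    cli.close()
    t.join()
    return received[0] * 8 / seconds / 1e9   # bits per second -> Gbit/s

print(f"approx throughput: {measure_throughput():.2f} Gbit/s")
```

On loopback this measures the local stack, so the interesting number only appears when the two halves run in separate VMs on the same vSwitch.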

Denny
 
Quote - mrdenny "If the VMs are on the same vSwitch and the same host then the network traffic will just go through the virtual switch, but it will still be going through the network part of the hypervisor. It won't go through the fiber channel at all.
Denny" End Quote

This is not necessarily true.

In a scenario where both the source and destination VMs are on the same vSwitch, same port group and VLAN:

VM1 is connected to vSwitch1, Port Group A and VM2 is connected to vSwitch1, Port Group A. In this example the VMs are plugged into the same vSwitch and the same port group on the same host server. Network traffic between VM1 and VM2 never leaves the host server and does not go to the physical NICs on the host server and thus never travels on the physical network.

However in a scenario where both the source and destination VMs are on the same vSwitch but different port groups and VLANs:

VM1 is connected to vSwitch1, Port Group A. VM2 is connected to vSwitch1, Port Group B. In this example the VMs are plugged into the same vSwitch on the same host server, but on different VLANs. Network traffic between VM1 and VM2 goes out a physical NIC on vSwitch1 to the physical switch it is connected to, gets routed between the VLANs, and then comes back in through a physical NIC on vSwitch1 and on to VM2.
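For illustration, the second scenario might look like this on the ESX service console (the vSwitch and port group names are the examples from the post; the VLAN IDs are made up):

```shell
# Same vSwitch, two port groups tagged with different VLANs.
# Traffic between them must leave the host to be routed between VLANs.
esxcfg-vswitch -A "Port Group A" vSwitch1
esxcfg-vswitch -v 10 -p "Port Group A" vSwitch1   # tag Port Group A as VLAN 10
esxcfg-vswitch -A "Port Group B" vSwitch1
esxcfg-vswitch -v 20 -p "Port Group B" vSwitch1   # tag Port Group B as VLAN 20
esxcfg-vswitch -l                                 # list the layout to verify
```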
 
I am confused here. joepc, what is the mode of connectivity to the SAN (NFS, iSCSI, or Fibre Channel)? Next, how many hosts do you intend to run, how many VMs do you intend to run, and most important, is this to be used in production? Also, if you are presenting one LUN to each blade, when you lose a blade, you lose all the VMs on that blade. There are WAY better ways to do this.
 
I have another three weeks to prep the blade, and then I will put it into production. I believe the SAN is NFS, but I'll have to double-check.

Right now I have a LUN for each blade. Each LUN/blade is running ESXi, so 3 hosts. Most likely going to be running around 4-6 VMs: DC, Exchange 2010, a couple of app servers, file and print, and one or two Terminal Servers.

I can reconfigure the whole thing if I want to. I'm just playing around with it now. So if there are WAY better ways to configure it I'm open to suggestions.

Should I create one giant LUN and assign it to all the blades? Then use HA in case a blade goes down?

NOTE: I only have VMware 4.1 Essentials Plus.
 
cabraun,
True, I didn't account for separate VLANs. I was assuming that both VMs would be on the same VLAN if they were on the same vSwitch (I set up my vSwitches at one per VLAN).

Denny
 
I would suggest first that you dedicate a VLAN to your storage that is unroutable, so that only members of that VLAN can attach to it, and keep all extraneous broadcasts off your storage VLAN. Next, dedicate two NICs just for a VMkernel port (depending on which blade manufacturer, this can be done in a variety of ways). Create one NFS mount (you don't have that many VMs, and you seem to be traversing the same hardware, so having multiple mount points isn't going to make a difference). All ESX hosts should point to this one NFS mount, and that is where you store all your VMs. This will allow you to utilize vMotion and HA.
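A sketch of that setup from the ESX 4.1 service console, run on each host. All names, VLAN IDs, IP addresses, and the NFS export path are placeholders, not anything from this thread:

```shell
# Dedicated storage vSwitch with two uplinks for the VMkernel port.
esxcfg-vswitch -a vSwitch2                           # new vSwitch for storage
esxcfg-vswitch -L vmnic2 vSwitch2                    # first dedicated NIC
esxcfg-vswitch -L vmnic3 vSwitch2                    # second dedicated NIC
esxcfg-vswitch -A "VMkernel-NFS" vSwitch2            # port group for storage
esxcfg-vswitch -v 100 -p "VMkernel-NFS" vSwitch2     # non-routable storage VLAN

# VMkernel interface on the storage VLAN (unique IP per host).
esxcfg-vmknic -a -i 10.0.100.11 -n 255.255.255.0 "VMkernel-NFS"

# One shared NFS datastore, mounted identically on every host,
# which is what makes vMotion and HA possible.
esxcfg-nas -a -o 10.0.100.5 -s /vol/vmstore nfs-datastore1
```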
 