Why use more than one vswitch?


jamescpp (IS-IT--Management)
Aug 29, 2001 · 70 · US
We are using four physical NICs with VMware vSphere 5.0. We have one vSwitch, with the switch ports configured in an MLT, multiple VLANs on the ports, and load balancing set to "route based on IP hash." I am not a traffic expert. I'm being told that "best practice" is to have a separate vSwitch for management and another for vMotion traffic. The only reason I have found for this so far is one site that said virtual machine traffic is "toxic" and shouldn't be mixed with "infrastructure traffic."
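For reference, this is roughly what the current config looks like as a pyVmomi sketch (one vSwitch, all four uplinks, route based on IP hash). The hostname, credentials, vmnic names, and "vSwitch0" are placeholders, not our real values:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to the host (placeholder hostname/credentials).
si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

# One vSwitch carrying all four uplinks, load-balanced by IP hash.
# "Route based on IP hash" needs the matching MLT/EtherChannel on the
# physical switch, as described above.
spec = vim.host.VirtualSwitch.Specification()
spec.numPorts = 128
spec.bridge = vim.host.VirtualSwitch.BondBridge(
    nicDevice=["vmnic0", "vmnic1", "vmnic2", "vmnic3"])
spec.policy = vim.host.NetworkPolicy(
    nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(policy="loadbalance_ip"))
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
```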

With one vSwitch, I can see a problem if vMotioning many VMs at once saturates the bandwidth of all the ports in the MLT. Outside of that, what are the reasons for splitting it up?

At a high level, it seems like a better idea to have all the NICs teamed so there is no single point of failure.

What are the downsides of doing this type of configuration?

Thanks
James
 
The vMotion best practice is not only a dedicated vSwitch but also a dedicated physical switch. It is not just the broadcast domain you want to isolate, it is also the switch backplane. Could you get away with using a common switch for everything? Yes, if network performance is not a priority. A vMotion uses a lot of network bandwidth and can interfere with production traffic. That is why you want to split it up.
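If you do split it off, a dedicated vSwitch plus a VMkernel port is all it takes. A rough pyVmomi sketch, assuming a spare uplink (vmnic3 here), a placeholder IP, and "vSwitch1" as the new switch name:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

# New vSwitch backed only by the uplink reserved for vMotion.
vsw = vim.host.VirtualSwitch.Specification()
vsw.numPorts = 64
vsw.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic3"])
net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vsw)

# Port group for the vMotion VMkernel interface.
pg = vim.host.PortGroup.Specification()
pg.name, pg.vswitchName, pg.vlanId = "vMotion", "vSwitch1", 0
pg.policy = vim.host.NetworkPolicy()
net_sys.AddPortGroup(portgrp=pg)

# VMkernel NIC with a static IP, then flag it for vMotion traffic.
vnic = vim.host.VirtualNic.Specification()
vnic.ip = vim.host.IpConfig(dhcp=False, ipAddress="192.168.50.11",
                            subnetMask="255.255.255.0")
vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vnic)
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
```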

Your management network simply needs redundancy, and one vSwitch with multiple uplinks will accomplish that. In small environments it works out just fine; in large environments it can pose a problem. Big servers run lots of VMs with lots of traffic, so it is best to leave the VM traffic on its own uplinks and use different uplinks for management. It is not much of a problem (at least for me) anymore since VMware introduced datastore heartbeating, so network congestion no longer causes false HA events. You can get away with using the same uplinks for management that you use for VMs if your environment is small (three hosts or fewer, 50 VMs or fewer).
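Even on a shared vSwitch you can keep management and VM traffic on different preferred uplinks by overriding the failover order per port group. A rough pyVmomi sketch; the port group names, vmnic assignments, and the untagged VLAN (vlanId 0) are assumptions:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

def pin_portgroup(pg_name, vswitch, active, standby):
    # Override the vSwitch-level teaming order for one port group.
    spec = vim.host.PortGroup.Specification()
    spec.name, spec.vswitchName, spec.vlanId = pg_name, vswitch, 0
    spec.policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            policy="failover_explicit",
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=active, standbyNic=standby)))
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

# Management prefers one pair of uplinks, VM traffic prefers the other.
pin_portgroup("Management Network", "vSwitch0", ["vmnic0"], ["vmnic1"])
pin_portgroup("VM Network", "vSwitch0", ["vmnic2", "vmnic3"], ["vmnic0", "vmnic1"])
```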

With only four 1 Gb NICs to work with, your options are limited. If you keep everything on a single vSwitch with common uplinks, you "should" limit any and all vMotion activity to after hours. For performance, I would suggest you take one of those uplinks and dedicate it to vMotion. Being able to move a guest between hosts without the users having the slightest clue something is happening is VERY important. The moment your users notice lag, slight pauses, or momentary disconnects due to a slow vMotion, they will likely deem your network unreliable.

Personally, I would add a two-port Ethernet NIC to your servers to bring the total up to six 1 Gb NICs. Then you can leave four ports for VM and management traffic and dedicate two uplinks to vMotion on their own vSwitch (two uplinks for vMotion can take a VM with 4 GB of active memory and 6 GHz of CPU from a 2 min 30 sec vMotion down to a 45 sec vMotion; it is damn fast). You could even add a four-port NIC and use two ports dedicated to vMotion and the other two dedicated to management.
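A rough pyVmomi sketch of that two-uplink vMotion setup (two VMkernel ports, each pinned to the opposite active/standby uplink). It assumes the dedicated vMotion vSwitch already exists; the vSwitch name, vmnic names, and IPs are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

def add_vmotion_vmk(pg_name, active, standby, ip):
    # Port group pinned to one preferred uplink, with the other as standby.
    pg = vim.host.PortGroup.Specification()
    pg.name, pg.vswitchName, pg.vlanId = pg_name, "vSwitch1", 0
    pg.policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            policy="failover_explicit",
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=[active], standbyNic=[standby])))
    net_sys.AddPortGroup(portgrp=pg)
    # VMkernel port on that port group, enabled for vMotion.
    vnic = vim.host.VirtualNic.Specification()
    vnic.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip,
                                subnetMask="255.255.255.0")
    vmk = net_sys.AddVirtualNic(portgroup=pg_name, nic=vnic)
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)

# Each vMotion VMkernel port prefers a different dedicated uplink.
add_vmotion_vmk("vMotion-1", "vmnic4", "vmnic5", "192.168.50.11")
add_vmotion_vmk("vMotion-2", "vmnic5", "vmnic4", "192.168.50.12")
```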

=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Brent Schmidt, Senior Network Engineer
Keep IT Simple | Novell Platinum Partner | Microsoft Gold Partner
VMware Enterprise Partner | Citrix Gold Partner
 