
Separating Storage, VMotion, Console, etc.


snootalope (IS-IT--Management):
Curious how others are separating their VM traffic, i.e. keeping the storage (SAN) traffic separate from the VMotion traffic and the normal LAN traffic between the actual VMs.

Are you using VLANs to keep each one separate? Or do you just use a different subnet for each and keep them all on the same VLAN?

Or, further still, do you use actual separate physical switches (like the book says to do)?

Just looking for info on what others have done and found to work just fine... thanks for sharing any info!
 
Running a 6-node HA and DRS cluster.

My SAN is Fibre Channel, so no issue there.
VMotion: I have a separate 8-port gigabit switch for a private VMotion network, with a VMkernel port on each host. I also have this set up for redundancy on the public network (a second VMkernel port on the public side, in case the 8-port switch goes down).
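For reference, here is roughly what that looks like from the ESX 3.x service console. This is only a sketch: the vmnic number, port group name, and addresses below are placeholders, not the actual values.

    # create a vSwitch for VMotion and link the uplink cabled to the private 8-port switch
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # add a port group and a VMkernel interface on the private subnet
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"

    # VMotion itself is then enabled on that VMkernel port in VirtualCenter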

I have 4-port gigabit NICs installed to allow granularity for my VMs. That lets you spread the networking across multiple physical switches for redundancy, or even dedicate a specific port to a certain subnet. VLANs can be used for even more control.
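A quick sketch of the redundancy part (again, the vmnic numbers and names are invented): two of the four ports go to different physical switches but back the same vSwitch, so losing one switch doesn't take down the VM networks.

    # two uplinks, each cabled to a different physical switch
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic1 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # one port group per VM subnet; esxcfg-vswitch -l shows the result
    esxcfg-vswitch -A "Prod-VMs" vSwitch2
    esxcfg-vswitch -A "Test-VMs" vSwitch2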

If you have VMs that only need to talk to each other, you can create virtual switches with no uplinks to keep that traffic internal so it never hits your physical switches.
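That one is trivial to sketch (the names here are made up): a vSwitch is internal-only simply because no physical NIC is ever linked to it.

    # no -L (uplink) line means no physical NICs,
    # so VMs on this port group can only reach each other
    esxcfg-vswitch -a vSwitchInternal
    esxcfg-vswitch -A "Internal-Only" vSwitchInternal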



 
Hi,
To use different switches, you need different NICs, and then you don't need VLANs (it's easier). If you don't have enough physical cards (ports, to be exact) and your switch supports VLANs, use them.
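For example, tagging two port groups onto one shared vSwitch from the ESX console (the VLAN IDs and names here are invented for illustration; the physical switch port has to be a trunk carrying those VLANs):

    esxcfg-vswitch -A "VLAN105-VMs" vSwitch0
    esxcfg-vswitch -v 105 -p "VLAN105-VMs" vSwitch0
    esxcfg-vswitch -A "VLAN110-VMs" vSwitch0
    esxcfg-vswitch -v 110 -p "VLAN110-VMs" vSwitch0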
cheers,
vittorio
 
This is what I do:
I limit my clusters to 6 nodes. I have found that more than 6 nodes attached to an FA pair, doing the kind of I/O that VMFS3 demands, can overwhelm the pair (I am attaching to EMC DMX storage, all FC).

Next, I use two NICs for the Service Console, and two NICs for VMotion. The VMotion NICs are plugged into my core, but I set up one layer-2 VLAN per cluster: VMotion should not route, and this lets me use the same IP addresses for all the A nodes (likewise for all the B nodes, etc.) of different clusters, which makes my kickstart scripts easier.

I then use multiple NICs for my production vSwitch. Now remember that adding multiple NICs to a vSwitch does not necessarily increase bandwidth: with four NICs on a vSwitch you get a theoretical 4 Gbps outbound but still only 1 Gbps inbound, unless you employ EtherChannel or the like (the switch has to be able to bond the NICs for inbound traffic).
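A sketch of the per-cluster VMotion VLAN idea (the VLAN ID and addressing below are made up for illustration): each cluster gets its own layer-2 VLAN, so node A of every cluster can reuse the same VMkernel IP because the VLANs never see each other.

    # node A of cluster 1 -- VMotion port group tagged with that cluster's VLAN
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vswitch -v 201 -p "VMotion" vSwitch1    # cluster 2 would use 202, and so on
    esxcfg-vmknic -a -i 192.168.100.1 -n 255.255.255.0 "VMotion"
    # node A of every cluster can be 192.168.100.1, since the VLANs are isolated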

Also remember that the best way to make sure everything is running the same is to do scripted installations. There are many ways to do this, but you should never build each ESX server individually from CD; it causes way too many problems when every server is slightly different.
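For instance, the network plumbing above can ride in the %post section of an ESX kickstart file, so every host comes up identical. This fragment is only a sketch; in a real script the per-host values would be derived from the hostname rather than hard-coded as they are here.

    # ks.cfg fragment -- %post runs in the service console after the base install
    %post
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vswitch -v 201 -p "VMotion" vSwitch1
    esxcfg-vmknic -a -i 192.168.100.1 -n 255.255.255.0 "VMotion"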
 