
iSCSI port binding


DgtlLgk (Technical User)
We decided to implement some virtualization at the office.
We bought the Essentials Plus kit and two storage devices (an IBM DS3512 and an EMC CLARiiON AX4).
After we set up vCenter Server 5 and the three hosts, we configured the storage arrays.
For the IBM we bought two extra iSCSI controllers (one for each storage controller), eight ports in total.

I've set up a small test network with both storage arrays, plus one host running vCenter Server and a virtual machine. On that test network I managed to connect to my storage through iSCSI. Now, for the real scenario, we need to set up iSCSI on a different network. Let's say our users are on 10.1.1.0/24 through 10.1.10.0/24 (classless addressing, if I remember correctly). We want iSCSI to run on 10.1.9.0/24, with the management ports of the storage arrays on 10.1.1.0/24, and we will also add some other VM hosts on 10.1.1.0/24. Now, when I install the iSCSI software initiator on the hosts through the vSphere Client, I have problems setting it up correctly.
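For reference, reachability of an array's iSCSI portal from a particular VMkernel port can be checked from the ESXi shell roughly like this (vmk1 and the target address below are just examples, not my real addresses):

# list the VMkernel interfaces and their IPv4 addresses
esxcli network ip interface ipv4 get
# ping the iSCSI portal, sourced from a specific VMkernel port
# (the -I option needs a 5.x build that supports it; otherwise use plain vmkping <target>)
vmkping -I vmk1 10.1.9.50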

This is what I do (I'm new to the subject, by the way):
Under Configuration I go to Networking, create a new vSwitch, and assign two physical NICs (vmnics) to it. I create a new VMkernel port and give it an address in 10.1.9.0/24, but with that address I cannot ping my iSCSI targets on 10.1.9.0/24. However, if I give it a 10.1.1.0/24 address, I can ping them over SSH from my ESXi host. Now, in the iSCSI initiator properties, the port binding option is disabled. For the VMkernel port's NIC teaming I have one NIC active and the other standby; I've also tried having both active, and port binding is still disabled.
Load balancing is set to "Route based on the originating virtual port ID".
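From what I've read so far, the port binding option stays greyed out unless each iSCSI VMkernel port group is tied to exactly one active vmnic, with the other uplinks set to unused (not standby). Something like this from the ESXi 5.x shell should express that; the port group names, vmnics, vmk numbers and vmhba33 below are just examples, not my real config:

# pin each iSCSI port group to a single active uplink
# (any uplink not listed as active or standby becomes unused)
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 --active-uplinks=vmnic3
# bind the matching VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# confirm the bindings
esxcli iscsi networkportal list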

Where are my mistakes?? ... Any further suggestions would be appreciated again :)


 
OK... I think I've found my main mistakes:
1. On the vSwitch I already had a second VM port group from another network.
2. It seems I have routing problems.
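If anyone wants to double-check the same thing on their host, the vSwitches, their uplinks, and the port groups hanging off them can be listed from the ESXi shell:

# vSwitches with their uplinks and attached port groups
esxcli network vswitch standard list
# port groups with their vSwitch, VLAN ID, and active clients
esxcli network vswitch standard portgroup list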


Let's say now that on my servers I have two physical NICs. I configure one for the main network's vSwitch (10.1.1.0/24) and assign the other to the second vSwitch for iSCSI (10.1.9.0/24). The second NIC is physically connected to a separate switch with VLAN capabilities. When I declare that the port on my physical switch belongs to VLAN 9, I lose communication with the ESXi host. The other NIC is connected to another switch.
... For the admins: I think this has become a routing thread instead of a VM one.
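If it helps anyone following along, my understanding is that the VLAN settings have to match on both sides: if the physical switch port is an access port in VLAN 9, frames arrive untagged and the port group's VLAN ID should stay 0 (none), whereas a trunk port carrying VLAN 9 tagged needs VLAN ID 9 on the ESXi port group. The port group name below is just an example:

# tag the iSCSI port group with VLAN 9 (only when the physical port is a trunk carrying VLAN 9 tagged)
esxcli network vswitch standard portgroup set -p iSCSI-1 --vlan-id 9
# verify the VLAN IDs on all port groups
esxcli network vswitch standard portgroup list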
 
When you declared the port on your physical switch as VLAN 9 (assuming a LAN switch; you didn't quite specify that detail), did you configure the ESXi host to also use VLAN 9 for the management network?
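You can check (and, if needed, change) that from the ESXi shell. "Management Network" is the default port group name, so adjust if yours differs, and do it from the DCUI/console rather than over SSH in case the change cuts you off:

# show the VLAN ID currently set on each port group
esxcli network vswitch standard portgroup list
# set the management port group to VLAN 9 to match a trunked switch port
# (leave it at 0 if the switch port is an untagged/access port)
esxcli network vswitch standard portgroup set -p "Management Network" --vlan-id 9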



=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Brent Schmidt, Senior Network Engineer
Keep IT Simple | Novell Platinum Partner | Microsoft Gold Partner
VMware Enterprise Partner | Citrix Gold Partner
 
My server has two physical Ethernet ports. One belongs to 10.1.1.0/24 (both physically and virtually; the management port is on this network too).
The other physical port is connected to another physical switch for 10.1.9.0/24 (I created a new vSwitch with a VMkernel port on 10.1.9.0/24).
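In case it matters, these are the commands I'd use to confirm which vmk sits on which subnet and what the VMkernel default gateway is; as far as I know there is only one default gateway for all VMkernel ports on 5.x, so the iSCSI vmk has to be on the same layer-2 segment as the array's 10.1.9.0/24 portals:

# list the VMkernel NICs with their port groups and IP addresses
esxcfg-vmknic -l
# show the VMkernel routing table and default gateway
esxcfg-route -l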
 