Folks,
I have a question about the round-robin load-balancing scheme on a Cisco
LocalDirector 416 running software version 4.2.5.
I have set up 3 real machines:
10.13.1.82
10.13.1.83
10.13.1.140
Port 80 on all three machines is bound to port 80 on the virtual address:
10.13.1.101
The load balancing scheme is set to round-robin.
When I stress test the system, the LocalDirector initially seems to
distribute the load evenly, but over time (within a few minutes) most, if
not all, of the requests go to only one machine. No timeouts or failures
occur, and no threshold values are reached.
Here is the output of 'show real':
# show real
                                                 No Answer   TCP Reset   DataIn
Machine                 Connect  State  Thresh   Reassigns   Reassigns   Conns
10.13.1.82:25:0:tcp     0        IS     8        0           0           0
10.13.1.140:25:0:tcp    0        IS     8        0           0           0
10.13.1.140:80:0:tcp    1        IS     8        0           0           1
10.13.1.82:80:0:tcp     27       IS     8        0           0           8
10.13.1.83:80:0:tcp     2        IS     8        0           0           2
10.13.1.83:25:0:tcp     0        IS     8        0           0           0
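During the test I take this 'show real' snapshot every few seconds to watch
the connection counts drift. The capture loop is roughly the following (a
rough Python sketch, not my exact tooling; the password and prompt strings
are placeholders, and telnetlib is only in the standard library through
Python 3.12):

import telnetlib   # stdlib through Python 3.12; removed in 3.13
import time

LD_HOST = "10.13.1.117"    # LocalDirector management address from the config below
PASSWORD = b"placeholder"  # not the real password

tn = telnetlib.Telnet(LD_HOST, 23, timeout=10)
tn.read_until(b"password:", timeout=5)   # adjust to the device's actual login prompt
tn.write(PASSWORD + b"\n")
tn.read_until(b">", timeout=5)           # adjust to the actual CLI prompt

for _ in range(60):                      # one snapshot every 10 seconds
    tn.write(b"show real\n")
    output = tn.read_until(b">", timeout=5).decode(errors="replace")
    print(time.strftime("%H:%M:%S"))
    print(output)
    time.sleep(10)

tn.close()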
Here is the output of 'show statistics':
localdirector# sh statistics
Real Machine(s)              Bytes   Packets    Connections
10.13.1.82:25:0:tcp              0         0              0
10.13.1.140:25:0:tcp             0         0              0
10.13.1.140:80:0:tcp      30264258     40427            479
10.13.1.82:80:0:tcp       23270854     31703            479
10.13.1.83:80:0:tcp       13703047     20270            480
10.13.1.83:25:0:tcp              0         0              0

Virtual Machine(s)           Bytes   Packets    Connections
10.13.1.101:80:0:tcp      67238159     92400           1438
10.13.1.101:25:0:tcp             0         0              0
My application uses HTTP 1.1 and sends 30 concurrent requests, which is
consistent with the connection counts in the 'show real' output
(1 + 27 + 2 = 30).
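For reference, the stress client behaves roughly like the sketch below: 30
worker threads, each holding one persistent HTTP/1.1 connection to the
virtual address and issuing requests in a loop. This is a Python sketch
only, not my actual client; the request path and counts are placeholders.

import http.client
import threading

VIP = "10.13.1.101"       # virtual address from the config below
PATH = "/"                # placeholder; the real test hits application URLs
WORKERS = 30              # matches the 30 concurrent connections in 'show real'
REQUESTS_PER_WORKER = 50  # placeholder request count

def worker():
    # One TCP connection per worker, reused across requests (HTTP/1.1 keep-alive).
    conn = http.client.HTTPConnection(VIP, 80, timeout=10)
    for _ in range(REQUESTS_PER_WORKER):
        conn.request("GET", PATH)
        resp = conn.getresponse()
        resp.read()       # drain the body so the connection can be reused
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()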
When I check the web server logs on the real servers, one machine always
gets the majority of the requests. The LocalDirector does not seem to
apply true round robin as the load increases, and this load is *not*
stressing the LocalDirector itself (CPU is at 2%).
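For the log comparison I am simply counting access-log lines per real
server, roughly like this (Python sketch; the log paths are placeholders
and assume one log line per request):

from collections import Counter

ACCESS_LOGS = {            # assumed locations; adjust to the real servers' setups
    "10.13.1.82":  "logs/82_access.log",
    "10.13.1.83":  "logs/83_access.log",
    "10.13.1.140": "logs/140_access.log",
}

counts = Counter()
for server, path in ACCESS_LOGS.items():
    with open(path) as f:
        counts[server] = sum(1 for line in f if line.strip())

total = sum(counts.values()) or 1
for server, n in counts.most_common():
    print(f"{server:<12} {n:>7} requests ({100.0 * n / total:.1f}%)")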
Here is the config:
localdirector# sh conf
: Saved
: LocalDirector 416 Version 4.2.5
: Uptime is 0 weeks, 0 days, 0 hours, 9 minutes, 40 seconds
syslog output 20.3
no syslog console
enable password 308271e7e208466462cb5a1dafc6a5 encrypted
hostname localdirector
no shutdown ethernet 0
no shutdown ethernet 1
no shutdown ethernet 2
interface ethernet 0 10baset
interface ethernet 1 auto
interface ethernet 2 auto
mtu 0 1500
mtu 1 1500
mtu 2 1500
multiring all
no secure 0
no secure 1
no secure 2
ping-allow 0
ping-allow 1
ping-allow 2
ip address 10.13.1.117 255.255.255.0
route 0.0.0.0 0.0.0.0 10.13.1.129 1
arp timeout 30
arp retries 15 4
arp gratuitous 60
no rip passive
rip version 1
failover ip address 0.0.0.0
no failover
failover hellotime 30
password 18c422e309cddd9efda0bde0fb08dcb5 encrypted
telnet 10.13.1.204 255.255.255.0
no snmp-server enable traps
snmp-server community public
no snmp-server contact
no snmp-server location
virtual 10.13.1.101:80:0:tcp is
virtual 10.13.1.101:25:0:tcp is
predictor 10.13.1.101:80:0:tcp roundrobin
predictor 10.13.1.101:25:0:tcp roundrobin
real 10.13.1.82:25:0:tcp is
real 10.13.1.140:25:0:tcp is
real 10.13.1.140:80:0:tcp is
real 10.13.1.82:80:0:tcp is
real 10.13.1.83:80:0:tcp is
real 10.13.1.83:25:0:tcp is
replicate interface 2
bind 10.13.1.101:80:0:tcp 10.13.1.140:80:0:tcp
bind 10.13.1.101:80:0:tcp 10.13.1.82:80:0:tcp
bind 10.13.1.101:80:0:tcp 10.13.1.83:80:0:tcp
bind 10.13.1.101:25:0:tcp 10.13.1.140:25:0:tcp
bind 10.13.1.101:25:0:tcp 10.13.1.82:25:0:tcp
bind 10.13.1.101:25:0:tcp 10.13.1.83:25:0:tcp
localdirector#
Can anyone shed some light on what might be happening?
Thanks,
Misk