
cisco 4506 locked up ... suggestions?

Status
Not open for further replies.

fh497a (IS-IT--Management), Jan 31, 2012
Hi all,

This has been happening once every 6-8 months.

What happens is I cannot ping anything on the switch (local gateway, other computers, routers, servers). I have to power cycle the switch to restore service.

Any suggestions? I did upgrade the IOS a while back, but it didn't help. I am at a loss and don't know what the issue could be.

1. Should I be scheduling a reboot to be proactive?
 
It's more likely that it is not locking up, but that there is a duplicate IP address for the interface that you're trying to connect to.

To verify this, ping the switch when it is working. Now type
Code:
arp -a

Write down what Physical Address it says for your switch IP address. Repeat this process when the switch locks up again and verify that the Physical Address matches. If it does not match, that means another host is trying to use the switch IP address.
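That comparison can be scripted. A minimal sketch, using canned `arp -a` output in Linux format (the IP 10.0.0.1 and the MAC are placeholders; Windows `arp -a` puts the MAC in a different column, so adjust the field number accordingly):

```shell
# Sketch: compare the MAC currently seen for the switch/gateway IP against
# one recorded while everything was working. ARP_OUT is canned sample output;
# in practice you would use: ARP_OUT=$(arp -a)
GOOD_MAC="00-19-2f-86-0a-bf"   # placeholder: MAC noted while healthy
ARP_OUT="gateway (10.0.0.1) at 00-19-2f-86-0a-bf [ether] on eth0"
CUR_MAC=$(printf '%s\n' "$ARP_OUT" | awk '/10\.0\.0\.1/ {print $4}')
if [ "$CUR_MAC" = "$GOOD_MAC" ]; then
    echo "MAC unchanged"
else
    echo "MAC changed: possible duplicate IP ($CUR_MAC)"
fi
```

If the MAC printed during the outage differs from the recorded one, some other host has claimed the switch's IP address.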

Also, plug a console cable into the switch before rebooting it and see if there is anything in the output of "show log".
 
Just to be clear:

Everyone connected to this switch (150 users) cannot ping anything. Devices at other locations were not able to ping anything on this switch either.
 
When I do the above, I only see the IP address and physical address of my gateway VLAN that is on the switch. Is that the physical address I should be writing down? I don't see the IP address of the switch itself listed in the ARP table.
 
As it's a switch, IP addresses are irrelevant. Your hosts don't care what the IP address of the switch is, and if your switch had no IP address they wouldn't care either.

What might be interesting is a "show mac-address-table" on the switch.
Save a copy of it and when the lock-up occurs, compare them.

It is more likely, however, that you will get clues as to what's going on by doing a "show log" after it's locked up, and by checking all your switchport utilisations during the lockup.
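If you can get on the console during a lockup before power-cycling, a few read-only commands are worth capturing to a file via your terminal client. A sketch of the sort of thing to grab (all standard IOS show commands):

```
show logging
show processes cpu sorted
show mac-address-table count
show interfaces counters errors
```

Comparing these against copies taken while the switch is healthy is what turns a panic reboot into usable evidence.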
 
Yeah, I wish I had time to troubleshoot, but it usually happens during the day, and it's the backbone switch of our network.

The switch is six years old, and management is suggesting we just replace it, but I'm not sure now whether that would even resolve the issue.
 
It certainly would be relevant if it is their default gateway and they are trying to access other computers, servers, routers, etc.
 
Looking at the table, I'm seeing a handful of these. Are they all switches, or could they be uplink ports to routers/firewalls/APs?

Code:
4    0019.2f86.0abf    static    ip,ipx,assigned,other    Switch

We have a few small 4-6 port switches around the building. Could this be part of the issue?

 
It's possible you have a loop and have spanning tree disabled. You would have to go out of your way to disable spanning tree, unless you set the ports connecting to those small 4-6 port switches with spanning-tree portfast. Portfast should only be enabled on host ports, not on ports where you plan to plug in a switch or hub.
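A minimal IOS sketch of that advice (interface names and descriptions are placeholders): portfast only on host ports, with bpduguard so that a switch accidentally plugged into a host port errdisables the port instead of creating a loop:

```
interface GigabitEthernet2/1
 description host port
 spanning-tree portfast
 spanning-tree bpduguard enable
!
interface GigabitEthernet2/2
 description uplink to small workgroup switch
 no spanning-tree portfast
```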
 
If your management are up for replacing it, why would you stand in their way?
Tell them they need a nice VSS pair of 6500s.... ;)

Anyway, from the sounds of things - when this happens, there's a panic and then you reboot the core switch and that means you've lost all the logs and presumably you don't have much monitoring going on.

For starters - get your switch logging to a syslog server. (Check out Kiwi Syslog or Splunk, maybe.)

You should also be using some sort of tool that continually collects SNMP stats off the switches (check out Solarwinds for example). In this case, you'll be very interested in reviewing all the switchport utilisation stats from the affected switch in the period leading up to the "lock-up". You should be suspecting some sort of flood as your first guess at what's going on - port utilisation stats will allow you to confirm/deny this guess and allow you to move on to the next step in identifying the problem.
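Getting the switch to log to a syslog server is only a few lines of IOS config. A sketch, assuming a collector at 10.0.0.50 (placeholder address and community string):

```
logging host 10.0.0.50
logging trap informational
logging buffered 64000
snmp-server community public RO   ! placeholder: use a non-default read-only community
```

With this in place, the logs survive the power cycle, and your SNMP poller can pull interface counters for the utilisation history.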
 
We are looking to get a new switch; this is what we currently have:
Cisco 4506

Sup II+10GE 10GE (X2), 1000BaseX (SFP) WS-X4013+10GE JAE10328ME7
and 5 of these 48 10/100/1000BaseT (RJ45)V, Cisco/IEEE WS-X4548-GB-RJ45V JAE1115C3GE

I want an upgrade to this, but with enough capacity for the 5x48 ports. Any recommendations?

Would something like this make sense?

1. 2. with redundant power supplies?
 
Or something like the below with 48-port modules?


What is the main difference between these two switches?

Just looking for something that meets our needs. We have about 100 users, and this branch contains all the core infrastructure for 15 branches. We don't have a ton of traffic: bladecenter/SAN/VMware/Nortel CS1000/file server data/SharePoint, about 30 servers, plus routers and such.
 
If you are having issues, then just replace the Cat4500 supervisor with something newer and faster in the line if you are running an "E" chassis. If you have a non-E chassis, then you would want to replace the chassis too, but you can still use your current linecards, which would save a lot of money.
 
I could try doing that as well....

I do have the budget to replace the switch; that way I can at least have a full physical spare.
 
For the interim, is it possible to schedule a reboot/reload of the switch every couple of months?
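IOS can schedule a one-off reload natively (it prompts you to save the config first). A sketch, with placeholder times:

```
reload at 02:00 15 Mar
! or, relative to now (in minutes):
reload in 720
```

Note this only works around the symptom; it does nothing to identify the cause.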
 
Are you 100% sure this is a hardware issue? I would not dare ask management to replace a piece of expensive network equipment and then have the problem persist.
 