simple design question


spivy66

Guys,

I have nine L2 2960X switches in a FlexStack and one L3 3560. I'm curious how I would design this network with as much redundancy as I can.

Let's say I have 8 VLANs. I figure on setting up a 2- or 3-port EtherChannel/VLAN trunk from the L3 switch to the first L2 switch in the stack. Then, for redundancy (God forbid any one of my switches or even a FlexStack cable went bad), do I set up another EtherChannel from each switch back to the L3 switch, or just connect a cable without EtherChannel and let STP do its job? It should sit in blocking mode until the other link dies, right?

My real serious question is that I only have one L3 switch, so if that goes I'm screwed. If I bought another L3 switch, how would I make it redundant for the other access switches out to the net? Do I need HSRP/VRRP? Because I only have IP Base (SMI), not EMI, so that would not work, plus the ISP router only has one connection. Any help would be great, thanks guys.

So here's the setup I'm looking to design:

NET <---- ISP router ----> L3 inter-VLAN switch ----> L2 FlexStack (8 switches)


-dan, CCNA
 
The cheap way would be HSRP between the two switches for each SVI. Make sure your layer 2 switches have a path to both layer 3 switches.
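
As a rough sketch of what that looks like per SVI (VLAN 10 and the 10.0.10.x addresses are made-up placeholders, not from your setup, and this assumes your image supports HSRP):

! Switch A - preferred active gateway for VLAN 10
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Switch B - standby gateway for VLAN 10
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 preempt

Hosts in VLAN 10 point their default gateway at the virtual address 10.0.10.1; repeat the same pattern with a different group number and addressing for each SVI.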
 
Hi,
==> I figure on setting up a 2- or 3-port EtherChannel/VLAN trunk from the L3 switch to the first L2 switch in the stack.

Use 2 or 3 ports (or more) from different switches in your stack for your EtherChannel, so switch 1 has one link, switch 2 has one link, etc. There is no point in creating a secondary EtherChannel for redundancy when you can take those ports and add them into the original EtherChannel.
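
On the 2960X stack side that would look something like this (interface numbers are just examples; the 3560 end needs its 2 or 3 ports bundled into a matching port-channel):

! one uplink port from each stack member goes into the same bundle
interface range GigabitEthernet1/0/48, GigabitEthernet2/0/48, GigabitEthernet3/0/48
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk

Mode "active" is LACP, so if a stack member dies, the remaining member links keep the bundle up.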


==> plus the ISP router only has one connection?

If your ISP only has one connection and you cannot get another ISP connection, then you are forced to have a single point of failure somewhere. You could get another 3560 and configure HSRP for internal redundancy, but if the switch connecting to the ISP fails you would have to physically move the ISP uplink to the redundant switch.

Out of curiosity, is the ISP managing your network's firewall on their router?
 
Guys,
Thanks for getting back to me. OK, that makes sense: since the L2 switches are stacked, if one port in that channel fails on any switch, the traffic can still find its way to the L3 switch through a channel member on a different switch.

As for the ISP and L3 switch connection, this is where I'm confused about how HSRP works.

Well, the problem is the budget and lack of resources. I will have most of my servers plugged into the L3 switch and a few in a standalone L2 switch, which is on a different channel to the L3 switch. I guess I cannot be fully redundant, because all my servers have one NIC to the prod network and the other to my iSCSI network switch (which is a totally different switch).

I don't know if it makes sense to get two L3 switches given the current setup I have. But let's say I had two prod NICs for each server: I would need to connect one to each L3 switch, and then HSRP would make sense here? Do I have this right? So if I lost one L3 switch, the other would come off standby? Also, would I have the exact same config on both L3 switches? I hope this makes sense to you guys.


Right now I'm managing the firewall. I have a separate PVLAN in an L2 switch for the public interface (/27 mask), but when I get my L3 distro switch I will have it plugged into there on a separate VLAN and use the L2 switch as an access switch.

Again thanks for your input..
 
Tphethai,

I'm not that big of a shop. I'll have 10 access switches and 1 L3 distribution switch. All my servers will be on the distro switch. Don't you think it's overkill to have a core switch with < 350 nodes? But yes, I would need to have all my VLANs and access lists on the distro switch, even though it's not recommended.
 
For redundancy in the core, you just get two 3750s (instead of your 3560) and stack them.
Call them your "core".
You do not need to waste time and money on any "distribution" layer.
You configure your EtherChannels on the core like this:
Core G1/0/49 <----> G1/0/1 Access
Core G2/0/49 <----> G2/0/1 Access
Core G1/0/50 <----> G3/0/1 Access
Core G2/0/50 <----> G4/0/1 Access

So any single switch failure, either core or access, doesn't break the link between the two stacks.
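
In config terms, and assuming those four links form one cross-stack bundle between the two stacks (the access stack mirrors this with its own Gi1/0/1 - Gi4/0/1 members), the core side would be roughly:

! core 3750 stack: all four uplinks to the access stack in one bundle
interface range GigabitEthernet1/0/49 - 50, GigabitEthernet2/0/49 - 50
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk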

To save money, use copper links instead of fibre. You can get copper cables with built-in SFP ends on them for almost nothing these days.

You then need to worry about redundancy for the servers.
Get a 2nd NIC for each Server and patch each Server into both Core switches
You can either configure them (on the switch and on the server) as an LACP trunk, or rely on the server to bond them together and manage them as an active/standby pair.
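
If you go the LACP route, the switch side is just another cross-stack bundle, something like this (port and VLAN numbers are placeholders, and the server OS has to be configured for 802.3ad/LACP bonding as well):

! one server NIC into each core stack member
interface range GigabitEthernet1/0/10, GigabitEthernet2/0/10
 switchport mode access
 switchport access vlan 20
 channel-group 10 mode active

If you use active/standby bonding on the server instead, skip the channel-group entirely and just give the server two plain access ports, one on each core switch.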

This is far simpler than frigging around with HSRP, which isn't very good for reasons of performance, efficiency, and support.
 
VinceWhirlwind,

Thanks for your reply. The problem is I only have budget for one 3560. Should I get one 3750 instead, and down the line, when I can, get another 3750 and stack them? I know there's about a $1500 difference, but I could probably swing it. I'm not a big shop and really don't want to waste money. We're talking about 350 nodes tops at HQ; all other connections to offices are VPNs. Can't I still have redundancy using two 3560s, or even one 3560 and one 2960, and let RSTP do its job? I know you said don't waste time on distro, but even if I get the 3750 core, it's still going to be the main switch out to the net and all my ACLs will be on this switch doing inter-VLAN routing.
 
Ultimately, a Cisco switch with a simple config on it will sit there humming away for years without ever giving you any problems, so although redundancy in the core as I have described is excellent, you can do without it.

You might want to keep your options open by making it a stackable 3750 for future projects. Down the track, if they want server redundancy, having two non-stacked switches in the core will make life a bit harder.

And as I said, you can probably use direct attach copper cables between your core and access stacks to save money buying SFPs.

Be very careful using the same "Core" switch for LAN & DMZ, even if it's on different VLANs. Use different coloured patch leads for DMZ (RED or BLACK) and label all cables and switchports very clearly. Print out a graphic switchport map of the shared switch and stick it inside the door of the cabinet or similar.
 
Yeah, I have DBs on all my networking equipment/servers. I have all my cables color coded and a diagram in my NOC explaining everything, but thank you for looking out for my best interests.
 
Guys,

So I think I'm starting to understand more about proper network design.

So, if I have this right: basically it's recommended to only plug access switches (workstations and servers) into distro switches, and distro switches into core switches, for redundancy? Or, if you have a small network, access switches into the core?

And correct me if I'm wrong, but it's not recommended to plug server machines into L3 switches (distro/core switches)?
 
Hi,
It all comes down to how much traffic you have going through your core and the hardware used for the core. If you get a lot of traffic, keep your core focused on routing/throughput. If you don't, then it is OK to plug your servers into your core. You basically want your core to route packets as fast as possible, but if 10-20 servers are not hampering its routing ability then you can plug servers into it.
 
Stubnski,

Thanks for the reply. I figured you were going to say something like that. I just want to ensure I use good practice as my network gets bigger.
 
If it's a small organisation, you often end up with a "core" that in fact includes data centre access.
Ideally you have access switches for the servers.
I'd be careful of using 3750s for a core if you have any kind of grunty servers in your data centre. I've heard lots of people complain about hard-to-identify performance issues, which I understand to be caused by the 3750s' rather shallow buffers, which can't cope with high volumes of big frames. So they're fine on the edge, but for back-end connections between a DB, file servers, and storage, I think you get problems.
 
I already created the design and plan on using 2960Xs in a FlexStack for all the servers, and then trunking that to the core. Sounds good?
 