
Building Out New Server Racks


CoreyWilson

IS-IT--Management
Feb 3, 2004
Has anyone ever seen a noticeable impact from leaving 1U between servers when building up a new rack? The premise being to allow some room for airflow and better heat dissipation. Obviously, as a rack fills up you will start filling in the gaps as space and server dimensions dictate. For appearance's sake it might look a bit tidier to distribute the servers evenly throughout the entire rack, except that you then end up juggling multiple cable lengths for RILO cards, SANs, network, etc.

I am just curious how others approach starting a new rack with 5-10 servers to begin with. Do you build starting from the middle (assuming a slide-out KVM tray), filling the space from the middle up? From the middle down? Top down or bottom up? Both?

What have you found to be most convenient? Obviously, placing UPS equipment at the bottom of the rack eliminates any top-heavy concerns. And since weight is distributed fairly evenly from front to back in most rack-mount equipment these days, the old rule that the heaviest equipment should always be mounted lowest more or less comes down to the ease of manhandling the equipment into the rack.

So, how do others approach a new build-out?
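
For scale, here's a rough back-of-the-envelope sketch of the space trade-off I'm weighing (the server heights below are made-up examples, not our actual inventory):

# Rough capacity comparison for a standard 42U rack:
# packing servers solid versus leaving a 1U gap above each one.
# Server heights are hypothetical examples only.

RACK_UNITS = 42
server_heights = [1, 1, 2, 2, 4]  # e.g. five servers to start with

def units_used(heights, gap=0):
    """Total rack units consumed, counting an optional gap above each server."""
    return sum(h + gap for h in heights)

solid = units_used(server_heights)           # packed solid
spaced = units_used(server_heights, gap=1)   # 1U gap above every server

print(f"Solid packing: {solid}U used, {RACK_UNITS - solid}U free")
print(f"With 1U gaps:  {spaced}U used, {RACK_UNITS - spaced}U free")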

Thanks
 
Hi Corey

Is this a multi-vendor build, or limited to a single provider? Who makes the racks?

I ask because my experience is mainly limited to IBM equipment (I have worked with Dell, Compaq, and HP also), and they do provide some nice (IMHO) rack-friendly kit that has improved enormously whilst I've been working in IT.

Cheers

Matthew

Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
Actually, we are a Dell shop now; we have been migrating away from IBM. That said, I have worked for two other companies in the past three years: one was solely HP/Compaq (250+ servers: DL360 G1/G2, DL380 G1/G2/G3, DL580 G1/G2, EVA 3000 and 5000 (I think), BL20P and BL40P blades) and one solely IBM (336, 346, 366, HS20 and HS40 blades with DS4xxx series SANs; 150 or so servers, none older than 3 years). So I have worked with the three major Intel server vendors. I posted in the IBM forum given the higher traffic, but I am talking about pretty much any standard 42U server cabinet released in the last couple of years. I have learned some valuable lessons and was just curious how others approach such projects.

Thanks
 
We've got some racks with IBM xSeries 330s (dubbed 'pizza boxes' in the organisation) stacked right on top of each other, with no spaces in between.

At some point we had to remove the front door because the servers turned themselves off frequently, apparently due to the temperature. With the door removed, they stopped doing that. The server room is kept at a constant (cool) temperature, but the airflow just won't cut it. So, looking back at it, I'd recommend 1U in between as well.
 
Hi Corey

Well, with all that in mind ...

I tend to plan to put SAN storage at the bottom, because those disk drawers tend to be v. heavy. If I'm just installing the one drawer, I try to reserve enough space directly above to accommodate at least 1 expansion drawer. As always though, this has to be an informed (ahem) guess :)

Next come the heaviest (largest) servers, working up to the 1U boxes at the top.

SAN switches tend to go above all the servers.

I recently installed overhead cable trays to flood-wire a small suite back to a cabinet-installed chassis-based switch, so the overall rack layout looks something like this:

PATCH PANEL
SAN SWITCHES
PIZZA BOXES
LARGER SERVERS
STORAGE
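
If it helps, here is that bottom-up ordering sketched out roughly the way I'd plan it on paper (the device names and heights are purely illustrative, not a real bill of materials):

# Minimal sketch of the bottom-up ordering described above.
# Devices and heights are illustrative only.

RACK_UNITS = 42

# (name, height in U), listed in the order they go in from the bottom up:
# storage first, then the larger servers, then the 1U boxes,
# then SAN switches, with the patch panel at the very top.
devices = [
    ("SAN storage drawer", 3),
    ("Reserved for expansion drawer", 3),
    ("4U server", 4),
    ("2U server", 2),
    ("1U pizza box", 1),
    ("1U pizza box", 1),
    ("SAN switch", 1),
    ("Patch panel", 1),
]

position = 1  # lowest rack unit
for name, height in devices:
    top = position + height - 1
    print(f"U{position:02d}-U{top:02d}: {name}")
    position += height

print(f"Free space above: {RACK_UNITS - position + 1}U")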

If you're familiar with the IBM enterprise racks you'll know what I mean by the rear rack space - I use this to create vertical cable looms with SAN&LAN together, power together, and KVM together.

If possible I like to "flood-wire" the racks with rack-mounted PDU units (again, the IBM enterprise racks allow for vertical installation in side bays, therefore avoiding any loss of the precious 42U server space) and as many LCM switches as required to provide capacity for the entire rack - 1 switch can accommodate up to 16 servers, and the cabling is much less bulky than the traditional KVM because it is mainly CAT5.
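
As a quick sanity check on console switch capacity for a dense rack (the server count here is just a hypothetical figure):

# Console switches needed for a fully populated rack, assuming the
# 16-servers-per-switch figure mentioned above. Server count is hypothetical.
import math

servers_in_rack = 30
ports_per_switch = 16

switches_needed = math.ceil(servers_in_rack / ports_per_switch)
print(f"{servers_in_rack} servers -> {switches_needed} console switch(es)")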

I have to say I've not yet experienced the problem with airflow, even with some fairly dense rack populations. If you do leave gaps between servers, and I hope I'm not telling Granny how to suck her eggs here, make sure you blank the gaps with the appropriate plates for your racks - otherwise the servers will circulate their exhausts back to the front of the rack, and you'll end up pulling warm air into the kit.

HTH

PS - if you've any advice/experience that you're willing to share around keeping records of assets, physical configs, and changes within your data centres I'd love to hear it...



Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.
 
I usually go from the top to the bottom of the rack when starting a new one, adding the UPS, KVM switches, and network switches at the bottom. We leave no spaces in the newer racks; on some of our older racks we did, for cabling, because there was no cable room. I have never seen an airflow problem, but that comes down to how good your data center cooling is.

Eric
Whirlpool Corp.
 
IBM has a tool called the Rack Configurator, which is used to lay out your rack based on the specs you provide. Although it is IBM-specific, I am sure you can use the info to simulate a similar scenario for your Dell equipment. Here is the link:

Hope this helps

Regards

Terry
 
