
Exchange 2007 in a blade environment


hcrider
IS-IT--Management
Apr 19, 2006
We are currently looking into moving to a blade environment. One of the issues we have bumped into concerns running Exchange 2007 in a blade environment. We have talked with several vendors and the answers vary. Some say it will have production issues and must be run on a server outside of the blade structure; others say it should be no problem. I'd like some thoughts on this if anyone has this running in their blade environment. Also, I'd like to know which platform and hardware you are using.
 
It depends.....

Generally, mailbox servers should have raw access to the storage. Introducing a layer between the server and its disks can add latency and other issues. So keep that in mind.

CAS, HT, and ET roles generally do fine in virtualization, and should work fine in a blade environment, all things being equal.

Pat Richard MVP
Plan for performance, and capacity takes care of itself. Plan for capacity, and suffer poor performance.
 
58sniper,

Even with the fiber connection? And is it hardware dependent? We're looking at IBM, Dell, and HP blade environments. Oh, I forgot to add the SAN that we want to use as well.
 
YMMV, really. The big recommendation is toward DAS for Exchange, purely for performance reasons.

You may be able to get a SAN to work. But you have to take everything into consideration when developing the storage configuration. Everything the I/O passes through can impact your IOPS.
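
To put numbers to that, here's a rough sizing sketch in Python. All the figures (per-mailbox IOPS profile, per-spindle IOPS, overhead factor) are illustrative assumptions, not values from this thread; substitute your vendor's sizing guidance.

```python
import math

def spindles_required(mailboxes, iops_per_mailbox, disk_iops, overhead=1.2):
    """Estimate spindles needed to hit an Exchange IOPS target.

    mailboxes        -- mailbox count on the server
    iops_per_mailbox -- assumed per-mailbox IOPS profile (heavy ~0.5-1.0)
    disk_iops        -- assumed sustained IOPS per spindle
    overhead         -- fudge factor for everything the I/O passes through
                        (HBA, fabric, controller, RAID write penalty, ...)
    """
    target_iops = mailboxes * iops_per_mailbox * overhead
    return math.ceil(target_iops / disk_iops)

# Illustrative only: 1,000 heavy mailboxes at 0.75 IOPS each, on 15k
# spindles good for ~150 sustained IOPS apiece.
print(spindles_required(1000, 0.75, 150))  # -> 6
```

That figure covers IOPS only; capacity, RAID overhead, and growth push the real spindle count higher, which is exactly the point about taking everything into consideration.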

Pat Richard MVP
Plan for performance, and capacity takes care of itself. Plan for capacity, and suffer poor performance.
 
I've used the IBM and HP blades with SAN for Exchange 2007, with both FCP and iSCSI connectivity to the SAN. Properly designed, it's a non-issue. I routinely see greater latency at the storport driver than I do over the FCP or iSCSI connection.

The whole DAS/SAN argument is a funny thing. If you read the MSIT whitepaper "Exchange Server 2007 Design and Architecture at Microsoft", you'll find the statement:

"With previous versions of Exchange Server, Microsoft IT relied on SAN solutions to provide the necessary configuration for its mailbox clusters. SAN provided a higher level of availability due to the architecture, and enabled Microsoft IT to achieve the number of disks required for I/O throughput and scalability. Mailbox servers clustered by using Windows Clustering and SAN-based storage enabled Microsoft IT to achieve 99.99 percent availability with Exchange Server 2003, yet the shared storage solution was a single point of failure that was expensive and required specialized skills to optimize and maintain the configuration. Additionally, the mailbox databases on disks remained single points of failure."

A one-time bad experience with middle-of-the-night phone support from a single SAN vendor, which caused a chain of events akin to a Rube Goldberg contraption on steroids and highlighted poor design and implementation, has caused MSIT to make a scathing generalization about an entire industry.


The DAS design at MSIT used a building block of 6,000 users. For those 6,000 users, the MSIT mailbox server design uses 200 spindles. The power consumption alone, when scaled to the entire organization, is enough to cause rolling blackouts. Just the acquisition cost of the spindles exceeds the cost of most SAN designs that could deliver the same level of performance and greater reliability. The power and cooling costs of the DAS design will likely exceed the acquisition cost of the storage within 18 months. That's sad. What they could save on operating costs in a month by going SAN would more than pay for the training required to obtain the "specialized skills" needed to optimize and maintain the configuration. What MSIT put in this paper runs counter to the trends of the entire storage industry, and to the goals of the people running their own datacenters.
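
As a rough check on the power claim, here's the arithmetic as a hedged sketch. The watts-per-spindle, cooling multiplier, and electricity rate are assumptions on my part, not figures from the MSIT paper; only the 200-spindle building block and the 18-month horizon come from the discussion above.

```python
def power_cost_dollars(spindles, watts_per_spindle=15.0, cooling_factor=2.0,
                       dollars_per_kwh=0.10, months=18):
    """Rough power + cooling cost of a DAS spindle farm over `months`.

    watts_per_spindle -- assumed draw of one 15k enterprise disk
    cooling_factor    -- ~1 W of cooling per 1 W of IT load (assumed)
    dollars_per_kwh   -- assumed utility rate
    """
    kw = spindles * watts_per_spindle * cooling_factor / 1000.0
    hours = months * 30 * 24  # approximate a month as 30 days
    return kw * hours * dollars_per_kwh

# One MSIT building block: 200 spindles per 6,000 users.
print(f"${power_cost_dollars(200):,.0f} over 18 months per building block")
```

Scaled across every 6,000-user building block in a large organization, that line item adds up quickly, which is the heart of the objection above.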

SAN isn't the answer for everyone. I'm absolutely certain there are many more 100-seat Exchange 2007 designs than there are 10,000-seat designs. For those small designs, DAS is a highly viable and inexpensive solution. The point is that DAS doesn't scale well: as the mailbox count and storage requirements increase, there comes a point where SAN makes a lot more sense than DAS. Pushing a DAS-for-every-instance philosophy does the customer base, and their own datacenter architects and managers, a disservice.
 
Especially with the advent of iSCSI, SANs are starting to make sense at smaller scales. Even those 100-seat Exchange shops probably have several other servers in the rack, and dropping $1,500 on five or six speedy SAS disks every time you replace a server helps justify a SAN investment pretty quickly. Factor in increased power, cooling, and UPS requirements, and virtualization starts making sense on a smaller scale as well.
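
A quick sketch of that break-even logic. The $1,500-per-server disk spend comes from the post above; the server count, refresh cycles, and SAN price are hypothetical placeholders.

```python
def cumulative_das_spend(servers, disk_cost_per_server=1500, refreshes=2):
    """Total spent on captive DAS disks across server refresh cycles."""
    return servers * disk_cost_per_server * refreshes

# Hypothetical: a 10-server rack over two refresh cycles, vs. an
# assumed $25,000 entry-level iSCSI SAN shared by all of them.
das = cumulative_das_spend(10)
print(f"${das:,} on DAS disks vs. ~$25,000 for a shared SAN")
```

Once the shared array also buys you snapshots, easier growth, and fewer stranded spindles, the comparison tilts even further.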
 
Anyone who is telling you not to use a blade infrastructure is on crack. It's cheaper, more easily managed, and will have a lower TCO if you do the figures over one year against anything. About the only thing you will run into is external connectivity, but having said that, Fibre or iSCSI interfaces are pretty cheap in blades if you go with the Cisco MDS kit.

I'd stay away from virtualisation as it's still too new; Microsoft Exchange MVPs will say "yeah, it's fine" and then mutter "in dev/test scenarios" under their breath.

I recently did a pretty big investigation into different options, and I found that HP blades connected to HP 2012fc arrays are the cheapest option at the moment. Because you have fabric switching in the middle of all this, you can scale the system with "storage building blocks": want to double mailbox sizes in the future, just double the number of 2012fc arrays you have, as sketched below.
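
The "storage building block" idea reduces to simple arithmetic, sketched below. The quota and per-array capacity figures are placeholders, not 2012fc specifications.

```python
import math

def arrays_needed(mailboxes, quota_gb, usable_tb_per_array):
    """Identical arrays required for a given mailbox footprint."""
    total_tb = mailboxes * quota_gb / 1024.0
    return math.ceil(total_tb / usable_tb_per_array)

# Placeholder numbers: 2,000 users, arrays with 3 TB usable each.
print(arrays_needed(2000, 1, 3))  # 1 GB quotas -> 1 array
print(arrays_needed(2000, 2, 3))  # double the quota -> 2 arrays
```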

Having said that, if you have a heap of SAN disk lying around doing nothing, use it, but it will probably end up costing more from a power perspective long term against SAS disk.

Simple, clean, and extremely cost effective.
 
I've had pretty good success virtualizing CAS/HT/ET roles on properly sized equipment. Blades, too.

I'm still not a fan of virtualization, but it does work. Too many people put multiple CAS/HT servers on the same VM host and then think that's a good resiliency solution. But when an array/proc/RAM/NIC fails, it takes down everything. Sure, there are failover options for this, but you get my point. That said, I do have some environments where thousands of users are connecting to a single virtualized CAS/HT server without issue.

Mailbox servers are another story, especially in high-activity environments. Exchange likes raw disk access, so putting it on some VHDs isn't the best solution. When you have a large-IOPS Exchange environment, and then toss on a journaling-based archiving solution, Blackberries, and anything else that sends those IOPS even higher, we need to squeeze more and more out of our storage and the related/connected pieces. And that's often harder to do when there is some virtualization or middleware piece involved. Just my .02. Of course, I defer to XMSRE, our resident storage guru.
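
That stacking effect is easy to model as multipliers on a base IOPS figure. The overhead factors below are assumptions for illustration only; real numbers come from your archiving and BES vendors' sizing guides.

```python
def effective_iops(base_iops, overheads):
    """Apply stacked overhead multipliers to a base IOPS requirement."""
    total = base_iops
    for factor in overheads.values():
        total *= factor
    return total

# Assumed multipliers -- replace with vendor guidance:
overheads = {"journaling archive": 1.25, "BlackBerry (BES)": 2.0}
print(effective_iops(750, overheads))  # 750 base -> 1875.0 effective
```

The more of these you stack, the less headroom any virtualization or middleware layer leaves you, which is the argument above.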

Pat Richard MVP
Plan for performance, and capacity takes care of itself. Plan for capacity, and suffer poor performance.
 
Well, we are going to a SAN/blade environment regardless of which way we go. As noted by xmsre, Microsoft isn't exactly big on the idea. Actually, only IBM has come out and said it will work, and I have spoken to someone who has it running in a much bigger environment (we have about 150-200 users).

theravager, Dell and HP were a little skeptical, so I will have to look into the solution you proposed. Can you give me more info, and have you implemented it or just looked into it?

We have to stay away from virtualization for the moment, as many of our 3rd party applications aren't up to that level yet.
 
IBM should be able to supply you with plenty of customer references. I've worked on several projects with them. I've done quite a few projects with HP blades as well, and I'm not sure why they're being skeptical. I think it depends on who you talk to. HP does have plenty of public whitepapers on the subject.



Now Dell, aren't they the ones that sold all that disk to MS?
 
jpm121

Valid point. Exchange isn't the only thing in the data center. When you look at consolidation options (blades, SAN, virtualization, etc.), you need to look at the whole picture.

 