
New to SAN, quick question


jpm121

MIS
May 6, 2002
778
0
0
US
We're bringing in pros to set up a new SAN for us because I'm 100% new to that world, but while I'm evaluating proposals, I had something I'm wondering about.

For an Exchange or SQL server, performance is supposed to improve when you separate the log files, the data, and the OS and program files -- on a single server this is done with separate spindles (RAID 1, 5, 10, whatever) for each function.

How does this work in a SAN environment, where we have an enclosure with 12 SAS drives? Does the SAN manage this on its own, or is it no longer a concern because of the logic built into the storage?
 
On a SAN it is actually just the same. The disks are most of the time bundled in some kind of RAID form, on which you then create volumes/LUNs to export. In some cases disks are bundled into multiple RAID 6 groups, and a logical layer is put on top of that. This way you have one big pool of storage, on which again you can define LUNs/volumes and export them to your hosts.

As a central storage array has a large amount of cache on its controllers, you write to the cache first, and the data is committed to disk afterwards. Read-ahead algorithms will boost performance when doing lots of reads. So it means that you should follow the best practices of your storage vendor on how to lay out your volumes, and for the rest let the box handle it.

Hope this clarifies something. :)

rgds,

R.
 
Thanks RMG, that clarified it quite a bit.
 
We create multiple RAID 5 sets on our SAN (preferably each dedicated to a host, but more often than not we end up with multiple hosts sharing the same set of disks). I was a bit worried at first that this would have a major impact on I/O performance, but we've not seen that yet (mostly thanks to the sophisticated caching on the controllers).

We did have a separate RAID 1 group for our Exchange logs, but it wasted so much space that we ended up putting another host's transaction logs on a separate LUN in the same RAID group -- which made everything a bit pointless really, as you lose the sequential write performance.

On some SANs it appears you just create one big RAID group and then add the LUNs you need as you connect hosts, but to my mind you lose a lot of control over performance (though you gain simplicity and better utilisation of the available space).
 
Nick, we're a pretty small shop but we're getting into some more advanced areas (for us anyhow). We'll be adding a SQL database server, and probably adding Exchange and some other stuff next year. Since the money is in the budget for the SAN we're going for it, even though we're probably considered "too small" to really need one.

For Exchange and SQL I'm thinking keep the logs on a local RAID1 volume and just store the database/stores on the SAN, along with all the user data and some other application databases we use. We're talking less than 100 mailboxes in the message store so I don't think we need a finely tuned structure other than separating out the log files.

Part of this is also getting into some virtualizing next year, and a SAN seems like an important building block for that.
 
If you want performance, you need as many spindles as possible. And for Exchange and SQL Server, it's mostly the logs that are written to. So don't go and put the transaction logs onto a RAID 1...
 
What would you suggest for storing the logs?
 
If you only have 12 disks, and the storage box allows RAID 6, I'd go for that: one big RAID 6 array with all the disks (leaving at least one spare, of course). When you then provision your LUNs, just make sure to create separate LUNs for logs and separate LUNs for DB files. You should normally be OK then. This is the best practice for most storage vendors.
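As a rough sanity check on what that layout yields, here's a capacity sketch. The per-disk size is an illustrative assumption (300 GB SAS drives, as mentioned later in the thread), and real arrays lose a bit more to formatting and metadata:

```python
# Rough usable-capacity sketch for one RAID 6 group (double parity).
# Assumes 12 disks total and 1 hot spare; per-disk size is illustrative.

def raid6_usable_gb(total_disks, spares, disk_gb):
    data_disks = total_disks - spares - 2  # RAID 6 reserves 2 disks' worth of parity
    return data_disks * disk_gb

print(raid6_usable_gb(12, 1, 300))  # 12 x 300 GB, 1 spare -> 2700 GB usable
```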

rgds,

R.
 
Normally, RAID 1 (or 10) is recommended for sequential writes for log files. RAID 5 (or 6) is not recommended due to the parity overhead. But that's based on DAS, right?

So is the idea that, since you're feeding all the data through the SAN's processing hardware, the performance gain (beefy storage controllers + 12 spindles) more than makes up for the inherent overhead of RAID 5/6?
 
Write cache on the controllers of many SANs out there can effectively negate one IO of the four-IO RAID 5 write penalty. Going further than that is a problem. As the working set grows as a percentage of the DB size, caching schemes quickly become ineffective. If you look at a controller like a Clariion or an EVA, you'll see that only a very small percentage of the cache is actually dedicated to write cache. This is exactly because of the futility of increasing write cache with this type of workload.

When you contrast this with RAID 10: once an IO hits the write cache and is persisted to at least one spindle, we're good; the write has been committed to disk. On the backend it still takes two IOs for every write, but our write penalty is still no worse than 2. The best you can get with RAID 5, for a system that overwrites blocks in place, is a write penalty of 3.
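To put numbers on what the write penalty does to host-facing throughput, here's a back-of-the-envelope sketch. The per-disk IOPS figure and the 2:1 read/write mix are illustrative assumptions, not measurements from any particular array:

```python
# Back-of-the-envelope: effective host-facing IOPS for a RAID group,
# given the backend write penalty (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6).

def effective_iops(disks, iops_per_disk, read_frac, write_penalty):
    raw = disks * iops_per_disk
    write_frac = 1.0 - read_frac
    # Each host read costs 1 backend IO; each host write costs `write_penalty`.
    return raw / (read_frac + write_frac * write_penalty)

# 10 data spindles at ~150 IOPS each, 2:1 read/write mix:
for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(name, round(effective_iops(10, 150, 2 / 3, penalty)))
```

Same spindles, same workload -- only the write penalty changes, and RAID 10 comes out well ahead for a write-heavy mix.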

For SQL, take a hard look at your IO workload. What is the average IO size? Do you mainly do read-ahead and bulk writes, or is it an OLTP application with a lot of singletons? The answer will determine whether RAID 5 would be appropriate.

For Exchange: Exchange 2003 with cached Outlook clients has a read/write ratio of 2:1. Exchange 2007 has a read/write ratio approaching 1:1, thanks in large part to the increased host cache. The write penalty of RAID 5 exceeds the read/write ratio in all cases; it's inappropriate for this workload. RAID 10 matches the read/write ratio for Exchange 2003 and is pushed somewhat beyond the limit for Exchange 2007. At MSIT, they intentionally reduced the RAM in Exchange 2007 servers, which brought the read/write ratio closer to 2:1; the storage design was all RAID 10. Take a look at the paper "Exchange Server 2007 Design and Architecture at Microsoft" for more detail.

I'll stick by the rule of thumb any day: if the write penalty exceeds the read/write ratio, then that RAID type is inappropriate for the application workload.
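That rule of thumb is simple enough to encode as a quick check; this sketch just restates it, using the write penalties and Exchange read/write ratios quoted above:

```python
# Rule of thumb: a RAID type is inappropriate for a workload if its
# write penalty exceeds the workload's read/write ratio.

WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4}

def raid_ok(raid_type, read_write_ratio):
    return WRITE_PENALTY[raid_type] <= read_write_ratio

print(raid_ok("RAID 10", 2.0))  # Exchange 2003, cached clients (~2:1)
print(raid_ok("RAID 5", 2.0))   # write penalty 4 exceeds a 2:1 ratio
print(raid_ok("RAID 5", 1.0))   # Exchange 2007 (~1:1)
```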

XMSRE
 
For a 100-mailbox Exchange server there's absolutely no reason to worry about performance when using RAID 1 for transaction logs. We run 300-400 mailbox Exchange servers on single 5-disk RAID 5 arrays (so the OS, DB and log volumes are all on the same disks) without any performance issues.

Dell's Exchange server sizing tool is hilarious in this respect -- it ends up recommending something like a £20k solution when a £5k one would be fine, but of course it's in their interests to do that.

If you have fairly low-end requirements (and it sounds like you do), I'd go with something like a Dell MD3000i SAN. I've just set one up for a client project and have ESX servers connected to it supporting around 140 users (running various VMs, including Exchange), and it's doing fine. Internally we use multiple Dell|EMC Clariion SANs, but they would be overkill for a small requirement.
 
I always got a charge out of Microsoft's recommendations about this, that, and the other for hardware configs (most of which don't discuss the lower-end deployment scales), while their VARs and OEMs go out and sell Small Business Server (Exchange, SQL, IIS, DNS, AD, and whatever else) all running on a 3-disk RAID 5 array.

Nick, can you tell me how many ESX servers and how many guest VMs your client is running? I'd also be interested in generic hardware configs for the servers, if you don't mind -- just trying to get an idea for budget reasons for next year.
 
Hi,

Before considering a SAN environment you need to work out your total data size, the frequency of change in the data, and your backup strategies.

To implement a SAN you may need to make hardware changes to your existing servers. Servers should have Fibre Channel cards (HBAs) to support data transfer between the SAN switches, the SAN storage, and the servers at the same speed.

Proper humidity and temperature should be maintained in the server room.

If you are looking for a mid-range SAN, then go for an IBM DS4800-84A with 300GB FC or 500GB SATA hard disks. Feature licenses need to be purchased to use the DS4800's features effectively.

Thanks
 
"We're talking less than 100 mailboxes in the message store so I don't think we need a finely tuned structure other than separating out the log files."

There's an acronym that I like to use when it comes to SAN design that has been very successful for me: KISS -- Keep It Simple, Stupid. In addition to that, Occam's Razor offers another great piece of advice: "All other things being equal, the simplest solution is the best." These two bits of wisdom have served me well in my years working on SAN/storage/Unix/Veritas.

I offer those two items because, in my opinion, everyone is making this far more complicated than it needs to be. There is great advice being given, but for your particular case I think simplicity is the best advice. You are working with only 100 mailboxes and the storage array has only 12 spindles, so performance shouldn't be the primary concern. In a 10,000-mailbox environment, performance followed by fault tolerance are the primary concerns. In any discussion about performance, RAID 6 should never be considered: it is by far the slowest-performing RAID configuration due to the double hit of dual parity. In 99% of environments, a medium-to-large RAID 5 set will be more than sufficient.

I've used RAID 5 exclusively through Exchange, SQL, Oracle, File Servers, and Imagery Systems with little to no issues because the overall LUN design and system configurations were done correctly up front. These environments have been on both small and large arrays, from a NetApp FAS3020c to a Clariion CX500/700 to an HDS USP.

The biggest concern with Exchange and SQL is where the data and logs live. Logs should live on separate LUNs and, more importantly, on different RAID groups. With only 12 spindles, you could do something like 4+1 and 5+1 RAID 5 sets with a single spare. Put your logs on the 5+1 and data on the 4+1 (assuming the groups have the appropriate amount of space).
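The arithmetic for that 12-spindle split is straightforward; the per-disk size below is an assumption for illustration only:

```python
# Splitting 12 spindles: a 4+1 RAID 5 set (data), a 5+1 RAID 5 set (logs),
# and one hot spare. RAID 5 loses one disk's worth of capacity to parity.

def raid5_usable_gb(disks_in_set, disk_gb):
    return (disks_in_set - 1) * disk_gb

disk_gb = 300  # assumed 300 GB drives, for illustration
data_set = raid5_usable_gb(5, disk_gb)  # 4+1 set for data
log_set = raid5_usable_gb(6, disk_gb)   # 5+1 set for logs
print(data_set, log_set, 5 + 6 + 1)     # all 12 spindles accounted for
```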

There are many more concerns when getting into the SAN/storage realm (array makes/models, switches, HBAs, drive types/speeds, system OSes, fan-in/fan-out, queue depth, etc.) than just RAID type. But without knowing more about what you're trying to do and with what systems, I can only talk at a high level.

Hope this helps.


------------------------------------------------------------------------------------------------------------------
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
Albert Einstein
 
@jpm121
We use Dell PowerEdge 2950s as our standard ESX 'building blocks': 2 x quad-core 3GHz CPUs and 16GB RAM, with either Fibre HBAs or quad-port NICs depending on whether it's iSCSI- or FC-attached to the SAN.

For the client there are currently only 2 ESX host servers plus 1 NetBackup/VCB proxy server attached to the MD3000i, and 10 VMs (it would all run fine on one server, but you obviously need 2 for redundancy).

Internally we aim for about 10-15 VMs per ESX host and are finding that the amount of physical RAM is the main limiting factor (rather than CPU or I/O), so we will probably bump the servers up to 32GB rather than add more ESX hosts to the farm.
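A quick way to sanity-check that RAM ceiling -- the hypervisor reserve and per-VM sizes below are illustrative assumptions, not figures from our farm:

```python
# Rough VMs-per-host estimate when RAM is the limiting factor.
# Assumes a fixed RAM reserve for the ESX hypervisor and an average VM size.

def max_vms(host_ram_gb, hypervisor_gb, avg_vm_gb):
    return (host_ram_gb - hypervisor_gb) // avg_vm_gb

print(max_vms(16, 2, 1))  # 16 GB host, ~1 GB per VM
print(max_vms(32, 2, 2))  # 32 GB host, ~2 GB per VM
```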
 
Nick, thanks very much. I just ordered an HP DL360 (fully ESXi compliant hardware, checked everything) with 2 x 2.5 GHz quad core Xeons, 12 GB RAM (plenty of expansion room). This server was originally going to run just a SQL server and a Terminal Server in production, but I'm going to add a VM for our point of sale app and maybe a couple other server roles that we've loaded into one box over the years. Next year we can virtualize the rest onto a second box and have the desired redundancy with 2 servers.

I had a sort of gut feeling about what could run on a single machine under ESX but it's good to know we'll have plenty of headroom. Thanks for taking the time to respond!
 