
Here I go again...drive config help


Davetoo (IS-IT--Management)
Building another Exchange 2007 server...going through the spreadsheet to determine the best drive config and getting some odd results. So I'd like some real-world scenarios from you, please.

Mailbox count is 450, with 10% growth over 5 years. Usage per mailbox is fairly minimal...25 messages in/out per day max, as an overall average.

I have an IBM 3550 with 3.5" drives, using two 146GB drives in RAID 1 for the OS. Going to use the other four slots for my RSG (Recovery Storage Group) later on.

Connected to a DS3200 with 12 300GB drives.

IF you were going to configure the drives, how would you do it?

My consultant is saying build an 11-drive RAID 5 with a HS (hot spare)...I think that's a huge mistake, thus my query here.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
You didn't say anything about your log files, so I'm assuming you were going to put them on the OS mirror or the RAID 5 array, which you most likely wouldn't want to do.

I would put the log files on their own RAID 1 mirror, assuming you aren't going to generate more than 300GB of logs, if using the disk array; or you could add more drives to the main server itself, like 146GB drives mirrored.

Also, you didn't say anything about current mailbox sizes. And are you planning on doing any sort of archiving, which could have an effect on storage requirements?

I would look at maybe putting a mirror of two disks on the array if you don't want to put them on the server unit.

You also have to look at what your SG/DB structure is: are you wanting to split them off to different LUNs instead of loading them all onto one RAID partition?

Someone else might have a different opinion on what you should do.

 
You're missing some critical information here, such as mailbox size limits. Without those, you can't properly size the storage without taking some serious WAGs. You also don't mention your HA model (if any). LCR would have a serious effect on the storage, and CCR would have a minor effect.

What's the DIRT (deleted item retention time) window?

Taking some "default" values of 1GB mailboxes, 14 days of DIRT, and no HA, you'd still be looking at 6 databases (each recommended in its own SG). You could use 13 LUNs for that when you factor in the RSG.
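
Just to show where a number like that can come from, here's the arithmetic, assuming the usual one-DB-LUN-plus-one-log-LUN-per-SG layout (my assumption of the math, not gospel):

  # Hypothetical LUN count for the 6-SG layout described above, assuming
  # one DB LUN and one log LUN per SG, plus a single LUN for the RSG.
  storage_groups = 6
  luns = storage_groups * 2 + 1
  print(luns)  # 13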

Pat Richard MVP
Plan for performance, and capacity takes care of itself. Plan for capacity, and suffer poor performance.
 
Sorry guys...I was just wanting a quick response on how to configure the RAID for my twelve 300GB drives in the DASD.

I'll sharpen my pencil and go through the idiotic MS routine.

Thanks.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
The rule of thumb is: If the write penalty of your proposed RAID type is higher than the read/write ratio of your application, then your proposed RAID type is not appropriate for your application.

Exchange 2007 using cached mode clients has a read/write ratio close to 1:1. RAID 5 has a write penalty of 4. RAID 5 is not an appropriate RAID type for Exchange 2007 with cached mode clients.

Why exactly is that?

<short answer>

You'll use twice as many spindles to reach a given level of performance as you would with RAID 10.

</short answer>

<long answer>

Let's pick an arbitrary performance number, say 1000 DB IOPS. With a read/write ratio close to 1:1, that means about half will be reads from the database and half will be writes to the database. In addition, writes to the logs are about the same as writes to the DB. So, I have:

500 database reads
500 database writes
500 log writes

Of course, this is really basic, with no room for growth, no allowance for backup IO, etc., but:

Let's take an imaginary spindle that does a nice round 150 random 8K IOPS at a 20ms response time.

Further, let P be the performance of a single spindle and N be the number of spindles in an array.

RAID 5 write performance = P*(N-1)/4
RAID 5 read performance = P*(N-1)

RAID 10 write performance = P*N/2
RAID 10 read performance = P*N

with P = 150 and N = 12,

RAID 5 write performance = 412.5
RAID 5 read performance = 1650

RAID 10 write performance = 900
RAID 10 read performance = 1800

Notice that with 12 spindles, RAID 5 cannot possibly meet the DB write requirement of 500 IOPS (even if the array were doing nothing but writes, you'd fall 87.5 IOPS short). You'll need a lot more spindles once you calculate in the read/write ratio and the overall requirement (writes were only half of the DB IO). Just to meet 500 reads and 500 writes per second to the DBs, you need 19 spindles, and on top of that another 15 spindles to handle the log write IO. Total spindle count is upwards of 35 spindles.

To be fair, with RAID 10 I'd need 18 or so spindles for the combined DB and log IO; however, the original 12 would just about meet the DB IO requirement. Still, 18 is just over half of 35.

</long answer>
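
If anyone wants to play with the numbers, here's the arithmetic above as a quick Python sketch. P, N, and the workload split are just the illustrative assumptions from this post, not measured values:

  # RAID 5 vs RAID 10 throughput for the hypothetical 150-IOPS spindle.
  P = 150                 # random 8K IOPS per spindle at ~20ms response time
  N = 12                  # spindles in the array
  db_reads, db_writes, log_writes = 500, 500, 500   # the 1:1 workload above

  raid5_write  = P * (N - 1) / 4   # write penalty of 4; one spindle's worth lost to parity
  raid5_read   = P * (N - 1)
  raid10_write = P * N / 2         # write penalty of 2 (mirrored writes)
  raid10_read  = P * N

  print(raid5_write, raid5_read)     # 412.5 1650 - short of the 500 DB writes
  print(raid10_write, raid10_read)   # 900.0 1800 - meets both with headroom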

Seems to me you should be looking for another consultant. The advice you were given would simply result in them selling you more disks, and enclosures, and racks, and so on.

 
Thank you xmsre! Ironically...I went through the whole spreadsheet for the planner and it came up with 12 drives.

So I'm going to configure four of them RAID 10 for the DB's, four for the TL's (transaction logs), three RAID 5 for the RSG, and a hot spare covering the eleven. That leaves me the four bays in my server to handle the LCR task.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
Now the RSG: that's one place where RAID 0 actually makes sense. Think about it; you use RAID to protect live data. An RSG is simply scratch space; it's not live data. You restore there long enough to do a repair or export the mail somewhere else. It's not a location that would contain live data. I'd just put the 3 drives in RAID 0 for the RSG. You'll get more space and better performance, and you'll need that if you run isinteg or eseutil on a DB there, or even if you export mailboxes.

 
Duh...yeah, sorry. I'm just so used to RAID 5 I can't get it out of my head! RAID 0 for the RSG drives.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
What kind of backups will you be doing: Backup API or snapshots?

Exchange 2007 uses an 8K IO size, so your allocation unit size should be at least 8K. If you are using Backup API streaming backups, they generally read in 64K chunks while doing the backup, so you can potentially increase your backup speed by making the allocation unit closer to 64K. If you use snapshots, then either eseutil or chksgfiles is used to verify the backup. These also read in 64K chunks, so you can potentially increase your backup verification speed by using 64K.

On the other side of the coin, you'll see more filesystem cache usage the larger you go, and that's really wasted. If memory is tight on the box, you'll probably want to go for a happy medium like 32K.
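
For example, on Server 2003 you'd set that when formatting the volume; the drive letter and label below are just placeholders for your own layout:

  rem Format a DB volume NTFS with a 64K allocation unit size (quick format).
  rem F: and the label are placeholders - substitute your own.
  format F: /FS:NTFS /V:SG1-DB /A:64K /Q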

How much cache do you have on the box? I'm guessing the MS calculator spit out a relatively low number, but I would want at least 8GB anyway. Remember that Exchange isn't likely the only thing on the host: you have backup applications, AV, console sessions, monitoring software, other third-party software, etc.

While you're at it, take a read of the blog post on working set trimming; it's really been the hot issue lately. Most important is the kernel update KB954337 and making sure your drivers are up to date. If your working set gets trimmed, it's like constantly being in a cold start; you lose that 70% IO improvement over Exchange 2003 from the larger DB cache. If your AV uses memory-mapped IO, then you'll want to limit the size of the system cache so that a system-wide trim does not trigger. I have a new experimental utility that's a bit better than using cacheset if you need to go there.

One last one: if your host has dual-core AMD Opteron processors, add the /usepmtimer switch to boot.ini. You'll save yourself a lot of grief. See the post I linked; the screenshots are from a real customer situation that I sent to Mike. The times appear to keep increasing; in reality it's just the processor timer that's off. It's an AMD processor bug. Talk about hard to troubleshoot...
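
For reference, the switch just gets appended to the OS line in boot.ini; the ARC path below is only an example, yours may differ:

  [boot loader]
  timeout=30
  default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
  [operating systems]
  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer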
 
Sorry xmsre, I'd asked Dave to remove my last three posts. I'd found an article and was going through it to align, and didn't hit "Back" to see that they said to use 64K allocation unit sizes. I have 12GB of RAM, so I'm good there. I'll look over the blog here in a minute while my drives are formatting. Using Intel quad chips.

I've set up the arrays: four 300GB drives RAID 1/0 (LUN 1) for storage groups, four 300GB drives RAID 1/0 (LUN 2) for TL's, and three 300GB drives RAID 0 (LUN 3) for the RSG. Going to put four 300GB drives in the server, RAID 1/0, for my LCR.

That gives me 560+GB for the stores which, if I limit the majority of my 450 users to 1GB, will be just fine.
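
Quick sanity check on that, using the figures from this post (ignores DIRT/dumpster and whitespace overhead):

  # Store capacity sanity check, figures from the post above.
  usable_gb = 560     # 4 x 300GB RAID 1/0, after formatting overhead
  users     = 450
  quota_gb  = 1       # 1GB limit for the majority of mailboxes
  needed_gb = users * quota_gb
  print(needed_gb, usable_gb - needed_gb)   # 450 needed, ~110 GB headroom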

I'm still going over all this though, because the Excel spreadsheet config thing is confusing me. It recommends 8 drives RAID 1/0 for my SG LUN and 2 drives RAID 1/0 for my TL LUN...but you have to have four drives minimum for RAID 1/0, so now I'm wondering if I've done something wrong here.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
OK, found my mistake. In the calculator you have to tell it the RAID drive config (i.e., 1+1, 2+2, etc.). The odd thing is that 1+1 is not a RAID 1/0 option, but it's the default...anyway, I changed that to 2+2. Now it recommends 8 drives for the SG's, 4 for the TL's, and one for the RSG.

The LUN thing is confusing the heck out of me, because the LUN requirements page says I need 7 LUN's...but the LUN's are set up on my DS3200 and I have three of them: one each for SG, TL, and RSG. So I'm gonna make a phone call to a buddy of mine and have a real quick transfer-of-knowledge on LUN's.

Thanks.

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
OK, I understand the LUN's a bit better now. But it's led me down the path to a different issue.

The calculator advised me to have seven DB's, each on its own LUN, at 63 users/DB. OK. I've already created my RAID 1/0's, so now I need to partition that into my seven LUN's for my databases.

But the partition design discussed here says to use diskpart to create the aligned partition. However, if I do that, the entire empty drive (all 8 drives in my RAID 1/0) becomes a single partition. I can't do that...I have to have seven partitions for my DB's.

What am I missing?

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
The reason you align partitions is that the first 63 sectors are used for the MBR, and the first partition would normally start on the 64th sector. Once you align the first partition, subsequent partitions on the LUN do not have this issue. As long as each partition size is a multiple of 32K, you're good to go.
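
In byte terms (assuming the standard 512-byte sectors), that's why the default start is misaligned and an aligned start is not:

  # Why the default partition start breaks 64K alignment (512-byte sectors).
  SECTOR = 512
  default_start = 63 * SECTOR    # 32,256 bytes - not a multiple of 64K
  aligned_start = 128 * SECTOR   # 65,536 bytes - exactly on a 64K boundary
  print(default_start % (64 * 1024), aligned_start % (64 * 1024))   # 32256 0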

 
OK, that's what I was not understanding...I just need to align the first partition on the drive (the "drive" being the array that I'm working on). So I select the disk, create a primary partition of the proper size using the align option, then create an extended partition, and on that extended partition I create my other six logical drive partitions, all the same size as the primary partition I created.
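
In diskpart that would look something like the sketch below; the disk number and sizes are placeholders for your own layout (align= is in KB, size= is in MB):

  rem Hypothetical diskpart script - adjust disk number and sizes to suit.
  select disk 2
  create partition primary align=64 size=174080
  create partition extended
  create partition logical size=174080
  rem ...repeat "create partition logical" for the remaining five DB partitions
  exit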

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
Just a note: if you are using Server 2008, then you don't need to align the partitions; it aligns new partitions automatically.
 
Correct. I'm on 2003.

One last question for everyone: is there any issue with putting the TL's and RSG on the same spindles? My second array of 580GB or so is way too large for my TL needs (the calculator recommends 11GB of TL's for each 135GB DB, and I built 57GB TL partitions). So I'm going to redo those, drop the TL's down to 15GB for each DB, and use the rest for my RSG if needed.
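
Rough numbers on what that frees up, using the figures from this thread (seven DB's from the calculator, the ~580GB array above):

  # Transaction log array arithmetic, figures from the thread.
  array_gb   = 580
  databases  = 7
  tl_each_gb = 15
  tl_total   = databases * tl_each_gb       # 105 GB of TL partitions
  print(tl_total, array_gb - tl_total)      # 105 for TL's, ~475 left for RSG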

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
I wouldn't see an issue with that, since the RSG doesn't really get used much. But you want to make sure that your largest database can be restored there and you can still survive the amount of time you specified for failed backup tolerance.

Pat Richard MVP
Plan for performance, and capacity takes care of itself. Plan for capacity, and suffer poor performance.
 
Well...I'm an idiot. I just realized I do not have a hot spare drive...I've used all twelve slots on this stupid thing. So...kind of a moot issue, I guess. I have to get another DS3200, so I'll populate it with some 146GB drives and use the 300GB's for an RSG.

I just overlooked it after I redid the config with the correct number of drives.

My thanks to everyone that helped!


I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
