
RAID Setup for Win 2003 File Server


spelk

Oct 16, 2008
We have a new server, a Dell PowerEdge 2900, that was bought with a RAID 5 setup using 4 x 750GB SATAu disks. On booting the server, it's currently configured with Virtual Drive 0 at 1.7TB and Virtual Drive 1 at 335GB. Does this mean I can load the Win 2003 OS onto VD1 and use VD0 for the file-sharing data? Also, if one of the 750GB drives dies, can I still do a hot-swap drive replacement with this configuration? I'd appreciate any comments from others on the ideal RAID setup. Thanks.
 
For housekeeping's sake you would prefer the boot/OS drive to be 0, but there should be no reason you can't load and boot from 1. However, since you have nothing loaded yet, I would reconfigure the RAID and make the smaller OS drive 0 and the data drive 1. As for the hot-swap question, that depends on what your RAID controller is capable of. Since it's new I would guess it can do hot swap, but I would read the docs to make sure.

just my 2 cents,



RoadKi11

"This apparent fear reaction is typical, rather than try to solve technical problems technically, policy solutions are often chosen." - Fred Cohen
 
OK, thanks for the swift response. I'll reconfigure the Virtual Drive setup as you suggest; it sounds more logical than the default setup from the factory.

One of the reasons I ask is that I'm new to all this, and I've had advice that you should keep your data on a RAID 5 setup and your operating system on a separate RAID 1 mirror. The idea is to keep them separate, because if one of the RAID 5 disks goes bad and the OS is on it, you can't just swap in a replacement drive.

Seeing the partitions in the RAID configuration, I was wondering whether this setup with the two VDs will let us recover from a drive failure, with the OS on VD0 (per your suggestion) and the data on VD1.

It's a PERC controller with 8 drive bays, and the 4 disks are currently in bays 0 to 3.

Would it be wise to purchase two more 750GB SATAu drives, insert them into bays 4 and 5, and set up a RAID 1 configuration on them to hold the OS?
 
Lots of ways to skin this cat; ask this question of 100 people and you will get 80 different answers. I'm not sure I would bother adding any additional drives to create a RAID 1 for the OS; it's probably a waste of money in your case.

The whole point of RAID 5 is that you can lose one disk in the array, swap it out, rebuild the array, and not lose any data. With the size of your disks it's going to take a long time to rebuild the array if you lose a disk, probably 8 hours or more; it really depends on how good your RAID card is. If the card supports it, you could build the RAID 5 with a hot spare, meaning put disks 0, 1, and 2 in the array and set up disk 3 as a hot spare. This reduces your total array size to 1.5TB, as in any RAID 5 you lose the equivalent of one disk to parity, but if a drive fails the hot spare stands in automatically and rebuilds the array without any intervention. Or you could keep all 4 disks in the array and swap out a failed drive yourself; you still lose one disk to parity, but it increases your total capacity to 2.25TB, minus formatting of course. Not sure I totally answered your question here, kind of rambling.
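If it helps to sanity-check the capacity math, here's a rough Python sketch; the 750GB disk size comes from this thread, and the rest is plain RAID 5 arithmetic.

# Rough sketch of the capacity math above. The 750GB figure comes
# from the thread; the rest is plain RAID 5 arithmetic.
DISK_GB = 750

def raid5_usable_gb(n_disks):
    # RAID 5 loses the equivalent of one disk to parity.
    return (n_disks - 1) * DISK_GB

print(raid5_usable_gb(4) / 1000, "TB")  # all 4 disks in the array: 2.25 TB
print(raid5_usable_gb(3) / 1000, "TB")  # 3 in the array + 1 hot spare: 1.5 TB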



RoadKi11

"This apparent fear reaction is typical, rather than try to solve technical problems technically, policy solutions are often chosen." - Fred Cohen
 
Keep in mind that the possibility of another drive failing during the rebuild process for RAID 5 on drives this large is fairly high. I would not use RAID 5 with anything larger than 150GB drives; go to RAID 6 or RAID 1/0 instead. With RAID 1/0 you only get half your raw drive capacity.
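To put a rough number on that rebuild risk, here's a back-of-the-envelope Python sketch. The 1-in-10^14-bits unrecoverable-read-error spec is an assumption typical of consumer SATA drives of this era, so check the actual datasheet.

# Odds of hitting an unrecoverable read error (URE) while rebuilding a
# degraded RAID 5. The 1e14 bits/URE figure is an assumed datasheet spec.
URE_BITS = 1e14          # expected bits read per unrecoverable error
DISK_BITS = 750e9 * 8    # one 750GB drive, in bits

def p_read_error(surviving_disks):
    bits_read = surviving_disks * DISK_BITS
    return 1 - (1 - 1 / URE_BITS) ** bits_read

# A 4-disk RAID 5 rebuild reads all 3 surviving disks end to end:
print(f"{p_read_error(3):.0%}")  # roughly a 1-in-6 chance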

I'm Certifiable, not cert-ified.
It just means my answers are from experience, not a book.

There are no more PDC's! There are DC's with FSMO roles!
 
"Would it be wise to purchase two more 750GB SATAu drives, insert them into bays 4 and 5, and set up a RAID 1 configuration on them to hold the OS?"
Ten years ago I used just a RAID 5 for the OS and data, but due to lower disk prices and the greater safety of RAID 1, I now only set up systems with RAID 1 for the OS and RAID 5 for data, mainly due to the complexities of rebuilding Active Directory DCs and servers with SQL instances. The last RAID-5-only server I lost was due to a drive firmware flash issue; it was an FSMO holder running Great Plains Dynamics, and it took 38 hours straight to re-install even with a TB... I will never use a single RAID 5 again because of this.

750GB drives for RAID 1 are way overboard; even with global spares in mind, two 70GB disks for the OS would be fine unless you have many users. The odds of two RAID 1 disks physically dying within a short period of time (say a week) are near astronomical (discounting power anomalies, overheating, or firmware bugs); RAID 5 is nowhere near as safe. Rebuild time for a large RAID 5 can be over 24 hours, and the rebuild is the most dangerous time for RAID 5; RAID 1 disks of the size I mentioned rebuild in under 2 hours. Overall performance should also be a bit better with an R1 plus an R5, as two separate spindle sets can move data to and from the individual arrays at the same time. RAID 5 performance suffers with the OS and data on one array, since the pagefile and temp files are reading and writing while data is being accessed.
Another safety factor of RAID 1: if you pull one of the RAID 1 drives and replace it with either a cold spare or a hot spare, the pulled drive becomes a clone that can be used as an instant restore drive.
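To put rough numbers on those rebuild times, here's a crude Python estimate; the 10 MB/s sustained rebuild rate is an assumption for a busy array on a controller of this vintage, so treat the outputs as ballpark only.

# Crude rebuild-time estimate: member-disk capacity over a sustained
# rebuild rate. The 10 MB/s rate is an assumption for a loaded array;
# an idle array rebuilds much faster.
def rebuild_hours(disk_gb, mb_per_sec=10):
    return disk_gb * 1024 / mb_per_sec / 3600

print(f"750GB RAID 5 member: ~{rebuild_hours(750):.0f} h")  # ~21 h
print(f"70GB RAID 1 mirror:  ~{rebuild_hours(70):.0f} h")   # ~2 h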

"if one of the RAID 5 disks goes bad and the OS is on it, you can't just swap in a replacement drive."
Not true. If you do place the OS and the data on the RAID 5, partition it: OS on one partition, data on another. That way, if the OS becomes totally unusable due to corruption, a virus, or malware, the reinstall is easier. Placing the OS and data on the same partition invites disorganization and security issues. Also, if the OS is on a separate partition and is totally destroyed, the data partition remains intact.



........................................
Chernobyl disaster..a must see pictorial
 
>RAID 5 is nowhere near as safe. Rebuild time for a large RAID 5 can be over 24 hours, and the rebuild is the most dangerous time for RAID 5; RAID 1 disks of the size I mentioned rebuild in under 2 hours.

Amen to that. I had a busy weekend a couple of months ago after a server lost two RAID 5 drives in the space of 2 hours (the rebuild stressed out one of the remaining drives). Luckily the OS was on a separate RAID 1 partition; if it wasn't, it would have been a *lot* more work.

If you can afford it, then yes, getting a RAID 1 pair for the OS isn't a bad idea (I generally try to stick with single-platter drives for the OS).
 
Just to add to Davetoo's statement....

Check out Intel's paper on RAID 6, not that I am recommending RAID 6.
Also note that LSI (the PERC OEM) added "Patrol Read" to the last few RAID adapter generations. Run on a regular basis, it checks the entire disk surface of the arrays for errors, which greatly cuts down on multiple drive failures.
On older-generation adapters, Consistency Checks only examined areas where data was located. Multiple drive failures mostly occurred due to error build-up in areas holding no data, which are then hit during a rebuild, when the array's entire disk surface is accessed. In a multiple-disk failure, too many errors are found in a short period of time and the adapter fails the disks instead of marking the sectors as bad.



........................................
Chernobyl disaster..a must see pictorial
 
I REALLY prefer to keep the OS on separate spindles. Why? Because the page file lives on the OS partition by default and I don't want application workload to impact the OS.

For a typical file server, where the data is read far more often than it is written, RAID 5 is fine. RAID 5 has a write penalty of 4: each logical write costs 4 I/O operations (read old data, read old parity, write new data, write new parity). The general rule of thumb is that if the RAID write penalty is higher than the read/write ratio of your application, then that RAID type is a poor choice. Unless your users are doing something abnormal, you're probably safe. If you have an inquisitive mind and really want to know, collect the physical disk counters reads/sec and writes/sec and calculate your read/write ratio.
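If you do collect those counters, a quick Python sketch of the comparison might look like this; the counter values are made up for illustration, and the write-penalty table is the standard rule-of-thumb set.

# Compare a measured read/write ratio against each RAID level's write
# penalty. Counter values below are invented; real ones come from the
# perfmon PhysicalDisk "Disk Reads/sec" and "Disk Writes/sec" counters.
reads_per_sec = 380.0
writes_per_sec = 45.0

WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 5": 4, "RAID 6": 6}

ratio = reads_per_sec / writes_per_sec
print(f"read/write ratio: {ratio:.1f}")

for level, penalty in WRITE_PENALTY.items():
    verdict = "fine" if ratio >= penalty else "poor fit"
    print(f"{level} (write penalty {penalty}): {verdict}")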

The more spindles you have in a RAID 5 set, the better the performance gets. Random read performance can be characterized as P*(N-1), where P is the performance in IOPS of a spindle and N is the number of spindles in the array.

What is interesting is allocation unit size. As allocation unit size increases, the IOPS per spindle only slightly decreases: on a 15K spindle you get about 150 random 4K IOs at a 20ms response time, and about 145 random 8K IOs on the same spindle, which is almost twice the throughput. You may want to find the average bytes per transfer with perfmon and adjust your allocation unit size accordingly. The default is 4K. Unfortunately, the allocation unit size is set when you format a disk, and changing it is a destructive process: you have to back up, reformat, then restore. That said, if there's a large mismatch between your average IO size and the current allocation unit size, it may well be worth the effort to double or quadruple your throughput.
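Worked through in Python with the numbers above (the 150 and 145 IOPS figures are the ones quoted in this post; the 4-spindle RAID 5 is just an example array):

# The allocation-unit observation, worked through. Per-spindle IOPS
# figures are from the post; the spindle count is an example.
def raid5_read_iops(per_spindle_iops, n_spindles):
    # Random read performance characterized as P * (N - 1), per the post.
    return per_spindle_iops * (n_spindles - 1)

for io_kb, spindle_iops in [(4, 150), (8, 145)]:
    iops = raid5_read_iops(spindle_iops, n_spindles=4)
    print(f"{io_kb}K IOs: {iops} IOPS ~= {iops * io_kb / 1024:.1f} MB/s")
# Doubling the IO size barely dents IOPS, so throughput nearly doubles.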


 
Many thanks to all who responded; for me it's a learning experience, and I appreciate your contributions. It's often difficult to find 'best practice' information, and these forums are invaluable for polling other IT professionals' opinions.
 