
hard drive configuration question 3


JacobTechy

Programmer
Apr 14, 2005
181
US
I am currently looking for a new server hard drive. I have been given the following hard drive configuration, but I'm not sure of the terminology and benefits, and therefore whether this is a good solution for our company. I told our supplier that we currently have 47GB of used space on our current server hard drive. From what I see in the specs, he is only giving us two 36GB mirrored drives, so how would this be enough space? Unless the 72GB drives will take some of the storage also.
Also, he said 3 of the striped drives is good, but I don't know why. So I don't understand what striped and hot spare hard drives are.

Please advise:


Intel SC5400 Server Chassis - Windows 2003 Server

Intel SAS RAID Drive Subsystem (6 Drives Total - RAID 0+1)
Intel® RAID Controller ROMB SAS RAID Controller
Intel High-performance SAS SCSI, PCI-X RAID
Intel Raid Hot Swap Drive Cage - 6 Bay
(2) 36GB SAS SCSI Hard Drives 15.4K RPM (Mirrored)
(3) 72GB SAS SCSI Hard Drives 15.4K RPM (Striped)
(1) 72GB SAS SCSI Hard Drive 15.4K RPM (Hot Spare)


 
What are your business requirements? How important is performance? How many users are accessing your data? How much data GROWTH do you expect in the next 12, 24, 36 months? I'll tell you, for MOST businesses, it sounds like you're being fleeced. I mean, it's a good system - in fact, it's an EXCELLENT system... but MOST businesses would never come close to needing the kind of performance this is going to give you... it's like buying a Hummer to drive 2 miles to the train station.

Now that said, what you'd be getting is 2x36 GB hard drives set up as a mirror (RAID 1) - PRESUMABLY, you would install Windows 2003 on the mirrored 36 GB drives.

THEN, you would have another three 72 GB drives combined into a RAID 5 which would provide you with 144 GB of storage (RAID 5 uses 1 disk worth of space for parity information so if any of the 3 disks fail, the system still runs, only slower, until the failed disk is replaced).

Finally, a hot spare is a disk that sits in the server and runs but is NOT used for storage. In the event that any of the other disks fail, the hot spare would be instantly used to replace the failed disk. In the configuration above, if EITHER a mirrored disk failed or a RAID 5 disk failed, the hot spare would replace it automatically. It's generally a good idea to have one.

With RAID 1 and RAID 5, you can have ONE disk fail in the RAID volume and still keep running. The hot spare allows for TWO disks to fail, PROVIDED they don't fail at the same time or within a few hours of each other.
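
If it helps to see the arithmetic, here's a rough sketch (plain Python; the sizes are the marketing GB figures from the quote above, and I'm assuming the layout is exactly the RAID 1 + RAID 5 + hot spare split described here):

    # Rough sketch: usable space in the quoted config (sizes in GB, marketing figures)
    os_mirror  = [36, 36]        # RAID 1 pair for Windows 2003
    data_raid5 = [72, 72, 72]    # RAID 5 set for data
    hot_spare  = 72              # powered on, but holds no data until a disk fails

    os_usable   = os_mirror[0]                      # RAID 1: one drive's worth -> 36
    data_usable = sum(data_raid5) - data_raid5[0]   # RAID 5: (n - 1) drives' worth -> 144
    raw_total   = sum(os_mirror) + sum(data_raid5) + hot_spare

    print(f"OS volume (RAID 1):   {os_usable} GB usable")
    print(f"Data volume (RAID 5): {data_usable} GB usable")
    print(f"Raw disk bought:      {raw_total} GB, of which {os_usable + data_usable} GB usable")
    # Each array survives any single disk failure on its own; the hot spare then
    # rebuilds into whichever array lost a disk.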
 
I'll second the opinion that this is a very nice system too! Fault tolerance (having a hot spare ready) is definitely a consideration in the design, and that is important to ANY business.

The question is, do you NEED high-level performance? Although the fault tolerance is a necessity, there are cheaper options (SATA over SCSI, non-striped solutions, etc) that can save quite a bit. If you have a significant number of people in the company accessing this server at any given time, and many rely on it for their everyday activities, then you should consider going with it even if it's a bit on the overkill side for now. It will give you some headroom to accommodate future increases in the number of users, applications, etc.

~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
For general rules and guidelines to get better answers, click here: faq219-2884
 
cdogg said:
there are cheaper options (SATA over SCSI, non-striped solutions, etc)

Personally I would never build another server with SATA drives, the possible exception would be with a killer card like a 3ware PCI-X. I've just had too many problems with onboard SATA controllers. SAS or SCSI is the only way to fly, with an appropriate controller card. 10K drives would be less expensive and run cooler.

What confuses me is the spec:

JacobTechy said:
(6 Drives Total - RAID 0+1)

RAID 0+1 (often lumped together with RAID 10) is a single array that combines striping AND mirroring - no parity involved. The drives spec'ed are obviously intended for two separate arrays, RAID 1 (for OS & apps) and RAID 5 (for data), since they are different sizes. Unless both arrays are on the same card (the specs seem to mention two controllers), the hot-spare drive will be for the RAID 5 array only. If only one controller is used, then the hot spare could be available to both arrays. I like hot spares.

I agree this system would be way overkill for most businesses, but then again MY server is way overkill for my business...better to err on the side of speed.

The key question in my mind is how long did it take you to accumulate that 47GB of data? Do you expect that 47GB to double, triple, or more over the next (5) years?

Another idea would be to give Dell or HP a call and see what they recommend.

Tony
 
Thanks for all your input. It was very educational. I have a couple of follow-up questions due to my inexperience in the area of computer hardware. I failed to mention that we plan on possibly buying two of these servers with the same hard drive configuration and 4GB of memory each. One server for DC/file server/SharePoint, and one for our SQL databases (about 5GB of data), which will also run our SQL-based time/labor tracking system.

lwcomputing said:
THEN, you would have another three 72 GB drives combined into a RAID 5 which would provide you with 144 GB of storage (RAID 5 uses 1 disk worth of space for parity information so if any of the 3 disks fail, the system still runs, only slower, until the failed disk is replaced).

Are you saying that of the 3 striped drives we could only use 2 of them for storage, because the 3rd one is used for something else? Also, if one of the RAID 5 disks failed, would it still run slower even if the hot spare takes over? And on the failed RAID 5 disk, how would we access the data on it since it failed? If the RAID 5 disk that failed was the one with the parity information, would the rest of the RAID 5 disks be inaccessible?

Let me give you further history on our current server. Our current server, purchased 7 years ago, has had 2 of the memory slots fail, so I am concerned that the motherboard will go out. Also, since we have both our DC and SQL DB on this one server, it is running very slowly with only 1.5GB of RAM available. We have about 50 users now who access this one server a lot for SQL, file and printer sharing, and our labor/time keeping system (SQL based) is also on this system. Also, on our new DC server with Windows 2003 I plan on installing and utilizing SharePoint v2.

Also, when we first purchased our current server we started off with 2x37GB mirrored hard drives, and we ran out of space about 2 years ago, so we purchased 2x60GB mirrored hard drives. We have users who are starting to save more files on their network "home" drive since it's backed up every night. We have one user who has 8GB worth of data ever since his local hard drive crashed. This same hard drive space issue occurred in our other office location as well.

Tony - I will contact Dell to see what they recommend. Thanks.
 
A mirror takes 50% of the total disk space and makes it available for use. So if you have two 73 GB drives, only one drive's worth of space is available to you (the other drive keeps a constant identical copy). And if you have four drives configured as two mirrored pairs, that's a total of 146 GB of usable space - each 73 GB drive is mirrored, so 4x73=292 GB raw, but you only get to use 2x73=146 GB.

With a RAID 5, you have n-1 disks available for use, where n is the number of drives you have total. Meaning if you have 4x73 GB drives in a RAID 5, then 3x73=219 GB of usable space. If you have 10 drives, then you have 9x73=657 GB available.
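
As a quick sanity check of those numbers, here's a minimal sketch in Python (assuming identical 73 GB drives, and ignoring the small difference between marketing and formatted capacity):

    def mirror_usable(n_drives, size_gb):
        # RAID 1: half the raw space is usable (each drive keeps an identical copy)
        return n_drives * size_gb // 2

    def raid5_usable(n_drives, size_gb):
        # RAID 5: (n - 1) drives' worth is usable; one drive's worth holds parity
        return (n_drives - 1) * size_gb

    print(mirror_usable(4, 73))   # 146 - two mirrored pairs
    print(raid5_usable(4, 73))    # 219
    print(raid5_usable(10, 73))   # 657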

The RAID 5 volume will operate slower while it is in a degraded state (degraded meaning one of the drives that is officially part of the RAID 5 has failed). A controller with a hot spare will take that hot spare and repair the RAID 5 volume, restoring it to full speed upon completion of the resync required to get the missing data on the spare disk. At which point you no longer have a hot spare (it's been used). So you replace the failed disk and that disk becomes your new hot spare (this CAN work a little differently; on high-end SANs and possibly some HIGH END RAID controllers, when you replace the failed disk it will rebuild the RAID onto that replaced disk, restoring the old hot spare back to hot spare status. More than likely your RAID controller won't do this).

Also, note, I said one disk worth of space - not one disk - stores parity information. You might want to review these links on RAID to get a better understanding of it:

What kind of processors are in your existing server? What kind of processors are in your new server? Moore's Law states that processors double in power every 18-24 months. If we average that to once every 21 months over 7 years, then your new server, based on processor alone, will be roughly 16x faster (give or take). And that's not factoring in memory enhancements, bus enhancements, dual cores, hyperthreading, or other technologies that will improve system performance. My point is, if your business has been getting along with what you currently have for 7 years, then this server config (15K RPM, SCSI) will likely be overkill. Again, it will ABSOLUTELY work for you... but I would expect that disk config to add $2000-$2500 to the price of the server.
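
For what it's worth, the back-of-the-envelope math behind that 16x figure (just a sketch, taking the ~21-month doubling period as given):

    months          = 7 * 12                       # age of the current server, in months
    doubling_period = 21                           # assumed months per doubling
    doublings       = months / doubling_period     # 4.0
    speedup         = 2 ** doublings               # 16.0
    print(f"~{doublings:.0f} doublings over 7 years -> roughly {speedup:.0f}x the processor power")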

Instead, I would probably go with SATA drives, even Western Digital RE drives, which are designed to be used in RAID volumes.

I wouldn't worry about the motherboard going - they are usually quite solid. Things can always fail, but that's why on a server, you get a 3 year warranty and same day 4 hour response.

And when it comes to disk space, don't forget, you don't NEED to put it ALL inside the server. I have several clients with eSATA or SCSI controllers that use "Direct Attach Storage" (DAS). These enclosures can be fairly cheap and are EASY to move to new servers later, so it's an economical way to add more space when you need it. (When I add these things, I usually need to get a controller, a cable, an enclosure, and the disk drives. This is almost always cheaper than getting a small NAS device - which I DON'T recommend.)
 
I know this is a bit off-topic, but I just want to note that "Moore's Law" specifically talks about the number of transistors on a CPU die. Moore said back in the '70s that it would roughly double once every two years for at least a decade. It turns out that it held up for a lot longer. But the point is that this has nothing to really do with CPU "power", which is a loose term.

The Pentium 4, when it first came out, for example, bumped the transistor count up tremendously (more than triple the size of the Pentium III). However, the performance of the Pentium 4 was barely an improvement at first, and in some benchmarks it actually lost to the Pentium III.

The future holds multi-core processors as the standard, and the real improvements are going to start coming from the FSB and memory controllers. Performance is going to have little to do with transistor count and more to do with the way software is written.

Just another 2¢

~cdogg
"Insanity: doing the same thing over and over again and expecting different results." - Albert Einstein
For general rules and guidelines to get better answers, click here: faq219-2884
 