
New PC build - best hard drive configuration

I'm building a PC for a customer and I think I know what hard drive configuration I want to use, but wanted to bounce it off everyone. She has online backup so this is about quick recovery or redundancy via RAID1. I will also purchase Macrium to do an image backup. I think all her data and operating system will fit comfortably on a 500GB drive.

1. 250GB boot SSD + single spinning drive for data (OS image backup to spinning drive)
2. 500GB boot/data SSD + single spinning drive for image backup destination
3. 250GB boot SSD + RAID1 spinning drives for data (OS image backup to spinning drives)
4. 2x 500GB SSDs in RAID1

I guess I'm going to pick option 4 since she doesn't have much data and everything will be on the RAID. Comments?
But what option would you choose if someone had a lot more data, making it too expensive to put it all on RAID1 of SSDs?
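For what it's worth, here is a quick Python sketch that just lays the four options side by side. The per-drive prices and the attributes are my own assumptions for illustration, not quotes, and the option descriptions simply mirror the list above.

[code]
# Hypothetical comparison of the four layouts above; prices are
# illustrative assumptions in USD, not quotes.
options = {
    1: {"desc": "250GB SSD boot + 1 HDD data", "ssd": 1, "hdd": 1, "redundant": False},
    2: {"desc": "500GB SSD boot/data + 1 HDD", "ssd": 1, "hdd": 1, "redundant": False},
    3: {"desc": "250GB SSD boot + 2 HDD RAID1", "ssd": 1, "hdd": 2, "redundant": True},
    4: {"desc": "2x 500GB SSD RAID1",           "ssd": 2, "hdd": 0, "redundant": True},
}

SSD_COST, HDD_COST = 80, 45   # assumed per-drive prices

for n, o in options.items():
    cost = o["ssd"] * SSD_COST + o["hdd"] * HDD_COST
    print(f"Option {n}: {o['desc']:30} ~${cost}  redundant={o['redundant']}")
[/code]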

"Living tomorrow is everyone's sorrow.
Modern man's daydreams have turned into nightmares.
 
Thanks. I am not interested in the performance aspects with regard to this discussion. I am only interested in your claim that it has a higher failure rate. Every reference I can find (and my limited experience with it) has shown it to have a LOWER failure rate than a single drive; that is, a lower failure rate for the thing that matters most: the data contained therein.

With the cost of hard drives over the past 5-10 years, I don't think having to buy two drives instead of one worries people more than the security of their data does. Backup, yes, but redundancy gives a quicker and more direct fail-safe for the most fragile piece of the whole modern/aging computer system. Another 5-10 years and we'll be asking "who even uses SATA anymore," and hard drive platters will just be for artistic expression. [wink]

"But thanks be to God, which giveth us the victory through our Lord Jesus Christ." 1 Corinthians 15:57
 
Compared to single drives, yes. But compared to other RAID types, RAID 1 is the least reliable, particularly because corruption gets replicated across the drives. The arrays are also frequently not configured for boot properly, so the boot drive has a RAID failure and then the second drive doesn't boot as expected.
Getting them rebuilt can then be very difficult.
In my experience, a single boot/app drive is clean: you can burn an image and then just recover the image onto a new drive, while all the data sits on another drive or, preferably, a RAID 5 array.
That array, of course, should be backed up on a regular basis as well.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
Scott
Reference number 1: the user has little knowledge of RAID 1, and he got his data back anyway.

Number 2....
" However, at the agreement of our support staff, I estimate that anywhere from 25% to 30% of our customers with RAID will call us at some point in the first year to report a degraded RAID array or problem directly resulting from their RAID configuration".

BS; that percentage is far too high. Yes, RAID arrays go into verify/degraded states; that is part of RAID. Onboard software RAID has issues with power loss or improper shutdowns, but that is not a failure. I would much rather have a degraded array than have a single drive fail and cause multiple hours of restoration plus data loss.

"The real question is: Is RAID1 really worth being 15-20 times more likely to have a problem? Keep in mind, RAID1 does nothing to protect you from:

1.Accidental deletion or user error
2.Viruses or malware
3.Theft or catastrophic damage
4.Data corruption due to other failed hardware or power loss"

15-20 times more likely to have a problem: that is an absurdly high number. With RAID 1 you are likely to have a 15-20 times lower chance of losing your RAID data.
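Just to make the arithmetic behind this disagreement concrete, here is a back-of-envelope Python sketch. The 5% annual failure rate and 72-hour rebuild window are assumptions for illustration only; the point is that mirroring roughly doubles the chance of a drive event (a degraded array) while sharply cutting the chance of actually losing the data.

[code]
# Back-of-envelope comparison of single-drive vs. RAID 1 data-loss risk.
# The 5% annual failure rate and 72-hour rebuild window are assumptions
# for illustration only, not measured figures.
afr = 0.05                  # assumed annual failure rate per drive
rebuild_hours = 72          # assumed time to replace and resync a failed mirror
hours_per_year = 365 * 24

p_single_loss = afr         # single drive: one failure = restore from backup

# RAID 1: data is lost only if the second drive fails during the rebuild window.
p_second_during_rebuild = afr * (rebuild_hours / hours_per_year)
p_raid1_loss = afr * p_second_during_rebuild

# Note: with two drives, the chance of *some* drive event (a degraded array)
# roughly doubles; it is the chance of losing the data that drops sharply.
print(f"P(data loss, single drive)  ~ {p_single_loss:.4f}")
print(f"P(data loss, RAID 1 mirror) ~ {p_raid1_loss:.6f}")
print(f"RAID 1 is roughly {p_single_loss / p_raid1_loss:.0f}x less likely to lose the data")
[/code]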

Number 3...
Same as number 1: the user has no clue; the data was not destroyed, and he got the RAID reinstated.

Where are your figures for the highest failure rates? Degraded arrays are not failures, and user ignorance does not count.

"but it's write speed is still slower than that of RAID 5 or 3." Write speed is not slower then raid 5 due to parity creation and writes to multiple drives of raid 5. Never heard of anyone using raid 3. Raid 5 does have a higher read speed.

" Now, take that up to 8, 10 or even 20 drives, and RAID 1 is really a waste."
For future reference 4, 8,10 drives mirrored is referred to as raid 10, which is very fast, overall safer then raid 1, as it has a good possibility it may survive more than 1 drive failure.
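A quick Python sketch of why a larger mirror set often survives a second failure: the second failure is fatal only if it lands on the partner of the drive that already failed. This assumes pair-mirrored RAID 10 and independent failures, which are simplifying assumptions.

[code]
# Chance that a RAID 10 survives a second random drive failure: fatal only
# if the second failure hits the partner of the already-failed drive.
def p_survive_second_failure(n_drives: int) -> float:
    """n_drives mirrored in pairs (RAID 10); assumes independent failures."""
    assert n_drives >= 4 and n_drives % 2 == 0
    return (n_drives - 2) / (n_drives - 1)

for n in (4, 8, 10, 20):
    print(f"{n} drives: {p_survive_second_failure(n):.0%} chance the second failure is survivable")
[/code]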

" particularly because corruption gets replicated across the drives"
Data gets written in raid 1 to one drive member, once the write is verified it is written to the second member. If the is an error to the 1st drive, the data is NOT written to the second 99% of the time. There are faults called a double fault/ puncture which causes an error to be written to the second drive, but again it is rare. Happens mostly with software based raid, very rare in true hardware raids.
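Here is a toy Python model of that write ordering (write to the first member, verify, only then write to the second). The Disk class is purely hypothetical and stands in for controller firmware; it just shows why a failed, unverified write on one member never reaches the mirror.

[code]
# Toy model of the verify-before-mirroring behaviour described above.
# The Disk class is hypothetical; real controllers do this in firmware.
class WriteError(Exception):
    pass

class Disk:
    def __init__(self) -> None:
        self.blocks: dict[int, bytes] = {}

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

def mirrored_write(primary: Disk, secondary: Disk, lba: int, data: bytes) -> None:
    primary.write(lba, data)
    if primary.read(lba) != data:          # verify the first copy
        raise WriteError("primary write failed verification; mirror untouched")
    secondary.write(lba, data)             # only now touch the second member
    if secondary.read(lba) != data:
        raise WriteError("secondary write failed verification")

d0, d1 = Disk(), Disk()
mirrored_write(d0, d1, lba=42, data=b"payload")
print(d0.read(42) == d1.read(42))          # True: both members consistent
[/code]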




........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 
Scott24x7 said:
Compared to single drives, yes. But compared to other RAID types, RAID 1 is the least reliable, particularly because corruption gets replicated across the drives. The arrays are also frequently not configured for boot properly, so the boot drive has a RAID failure and then the second drive doesn't boot as expected.
In the context of this thread and topic, those points are out of scope.

1. Advanced RAID - out of context for the discussion. The discussion is simple: single disk vs RAID 1, that is all. The original question asked for a choice between predetermined possibilities.
2. RAID was never mentioned for the boot drive. The idea here is simple: SSD for the boot drive, RAID only for data.

Please keep it to discussing the topic at hand. It's easy for any of us to stretch things out of scope. We're not looking to resize our discussion like resizing a RAID Array and partition. [wink]

"But thanks be to God, which giveth us the victory through our Lord Jesus Christ." 1 Corinthians 15:57
 
In all my years of using RAID (since 1997), I've never seen data corruption replicated to the second drive or a situation where the OS wouldn't boot if one drive failed.

RAID 1 saves me from rebuilding systems, which beats having to charge for a system rebuild.
This is why I would use it for a home or home office customer if they wanted to pay for the extra drive.

For business, you're just plumb dumb if you have a server and you don't use SOME FORM of RAID - not necessarily RAID1.

For the record, my question was really about RAID for the OS or OS + data if you have a small amount of data, always assuming you have some good form of backup for the data.

"Living tomorrow is everyone's sorrow.
Modern man's daydreams have turned into nightmares.
 
"In all my years of using RAID (since 1997), I've never seen data corruption replicated to the second drive or a situation where the OS wouldn't boot if one drive failed."
Goombawaho, remember there are those of us who reside in different universes, where RAID is 15-20x more likely to fail and suffers from the RAID maladies you mention quite often. [thumbsup2]

........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 
Also, remember, just because you haven't seen it doesn't mean that it won't happen. This is a false sense of security.
I'm not just talking out my keister here... this is a build I did earlier this year:


It's for my current office setup, but it walks through the original build in 2012 and then the rebuild in 2017.

Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
"remember there are those of us who reside in different universes where RAID is 15-20x more likely to fail and suffers from the RAID maladies you mention quite often."
Technome - Sarcasm towards Scott24x7 or straight comment? I only care about THIS universe.

I'm sort of sorry to have started this. Actually, we keep throwing around "RAID" without specifying which implementation. I'm pretty sure there would be agreement that a separate RAID card is better than something like Intel R.S.T. or Windows RAID.
Maybe: separate card >> Intel R.S.T. >> Windows RAID



"Living tomorrow is everyone's sorrow.
Modern man's daydreams have turned into nightmares.
 
Sarcasm

........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 
I'm fine to take that either way.
In reality, I have seen environments where RAID does fail more often than you think.
I design and build data centers for a living, where we often have in excess of 100,000 servers in a single building.
If you're used to dealing with small to mid-size office or enterprise, then you won't have seen the scale of these issues.
I'm ok with that.

There are indeed different "universes" of operation. I'm just looking at a wider sample.


Best Regards,
Scott
MIET, MASHRAE, CDCP, CDCS, CDCE, CTDC, CTIA, ATS

"Everything should be made as simple as possible, and no simpler."[hammer]
 
technome: Can you suggest a short paper on setting up a RAID 1? And maybe outline how to ensure the OS is properly addressed, and any hiccups to look out for? It's a tall order, but I'll revisit using RAID 1 for redundancy. I may have sold it a little short.

Dik
 
"Can you suggest a short paper on setting up a RAID 1?"

There are tons of short "papers" out there; Google "raid 1 setup". You need to read several, as there is a lot of false information out there. For a desktop, there are a number of good solutions mentioned in the posts above.

For the most reliable setup and the highest performance, a true hardware RAID is the way to go. For servers my minimum setup is 15k rpm drives with a hot spare, plus cold spares in a locked cabinet; at this point I am starting to use SSD drives. RAID 1 in this configuration is extremely reliable but not super fast, so a RAID 10 with a hot spare is best for performance and reliability, as it has a high read rate. In RAID 10, reads are served from both drives of a mirror (unlike RAID 1), and as mentioned, a RAID 10 may survive more than one drive failure. RAID 10 still has a write-penalty delay, but it is generally made up for by controller cache. Aside from that, most servers are about 80% reads and 20% writes, so reads are the biggest concern (a rough sketch of what that mix does to throughput follows below).

Another advantage of true hardware RAID is that the RAID system is basically shielded/insulated from the OS. RAID adapters very rarely fail; most reported failures are user-induced issues due to lack of knowledge. Something goes wrong in the OS or a connection becomes loose, blame it on the RAID adapter, then start playing around with the RAID configuration without reading the simple manual... false conclusion: it's a failure of the adapter... more like a brain-cell failure.
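As a rough illustration of that read/write mix point, here is a small Python sketch using the usual rule-of-thumb write penalties (RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6). The 150 IOPS per spindle figure and the drive counts are assumptions, not measurements from any box discussed here.

[code]
# Rule-of-thumb effective IOPS for a few RAID levels under an 80/20
# read/write mix. Write penalties are the usual textbook values.
def effective_iops(drives: int, iops_per_drive: float,
                   write_penalty: int, read_fraction: float) -> float:
    raw = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_penalty * write_fraction)

per_drive = 150   # assumed IOPS for a fast mechanical drive
for level, drives, penalty in [("RAID 1", 2, 2), ("RAID 10", 8, 2),
                               ("RAID 5", 8, 4), ("RAID 6", 8, 6)]:
    print(f"{level:7} {drives} drives: ~{effective_iops(drives, per_drive, penalty, 0.8):.0f} IOPS")
[/code]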

As to hiccups with hardware RAID: most setups are NOT documented properly; personally, I use software to back up the configuration, and I document it. IT should always have spare drives on hand: NEW retail drives, not returned, refurbished, or recertified drives, as those are often drives which have failed in an array... they will fail again.

Most RAID adapters have internal utilities to check for "consistency" and for "drive scrubbing". Drive scrubbing (Dell calls it Patrol Reads) MUST be used religiously on larger-capacity arrays, as it checks for errors on the entire surface of the drives: the servo, the data area, and the unused areas of the disks. A consistency check only checks the area the data resides on, so "scrubbing" is much more important than consistency checks. Without scrubbing, errors primarily build up in the unused areas of the disks; then, during a rebuild, the RAID error checking becomes overwhelmed and fails multiple drives, causing an array failure.

RAID is not a backup, so the most important thing is to have a decent backup system. On Active Directory DCs I clone the system drives/arrays every 6 months so that, should anything happen to AD, I have a usable clone to go back to within the AD tombstone lifetime.
At this point I am rambling on, I will stop here.
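To put rough numbers on why scrubbing and rebuild exposure matter, here is a hedged Python sketch: it simply applies a published unrecoverable-read-error (URE) spec to the amount of data a rebuild has to read. The 1-in-1e14 URE rate is a typical desktop-drive spec-sheet figure, and the capacities and drive counts are assumptions for illustration, not figures from this thread.

[code]
import math

# Rough odds of hitting an unrecoverable read error (URE) while rebuilding,
# i.e. while reading the surviving drives end to end. Latent errors that
# scrubbing would have caught early are exactly what bites here.
ure_per_bit = 1e-14   # assumed spec-sheet URE rate (1 error per 1e14 bits read)

for capacity_tb, drives_read in [(1, 1), (4, 1), (4, 7), (10, 7)]:
    bits = capacity_tb * 1e12 * 8 * drives_read
    p_hit = -math.expm1(bits * math.log1p(-ure_per_bit))
    print(f"{drives_read} x {capacity_tb}TB read during rebuild: "
          f"~{p_hit:.0%} chance of at least one URE")
[/code]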


........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 
No, you're not rambling. That was a very good dissertation. That has been the fear in using some of the RAID levels as hard drives have gotten bigger and bigger: a second failure during a rebuild onto a replaced drive. This is all WAY beyond what we're talking about for home/small-business users, but having a spare drive on hand is very smart (not refurbished, not used and pulled because it fell out of the array), because time is of the essence when you need to restore an array. You don't want to have to hunt down and order an older-model hard drive, and most likely all the drives were purchased at the same time and are aging at the same rate, so failure of another drive is not far behind the first (theoretically, via MTBF).

I would imagine that as we start to use SSDs in RAID they should become more reliable when doing a rebuild.

One more thing. Any device with RAID should have a UPS attached to avoid sudden power outages that could corrupt the data. Nothing worse than a power outage when the controller is trying to write to multiple drives, especially if that data is part of a database or the operating system.

Articles about RAID5 being obsolete and RAID recovery
Link
Link

"Living tomorrow is everyone's sorrow.
Modern man's daydreams have turned into nightmares.
 
Very interesting thread. Thanks for posting the link to your imgur photos of that crazy build, Scott24x7. I didn't look at 100% of it, but it's pretty neat anyway. And if you're doing that heavy a processing load, and can afford that much hardware and software to support your needs, then yeah, RAID 1 would seem rather silly, especially without a dedicated hardware RAID card.

I'd love to build something like that just to do it. I cannot honestly say I need it bad enough, and I DEFINITELY do not have pockets deep enough to ever do that. Time is yet another matter - that didn't happen overnight, I'm sure.

"But thanks be to God, which giveth us the victory through our Lord Jesus Christ." 1 Corinthians 15:57
 
"One more thing. Any device with RAID should have a UPS attached to avoid sudden power outages that could corrupt the data. Nothing worse than a power outage when the controller is trying to write to multiple drives, especially if that data is part of a database or the operating system."

Actually, with hardware-based controllers that have an onboard battery you're very safe, as once power is restored the battery-backed data finishes writing to the drives.
I agree that onboard RAID interfaces need a BBU for data which is critical. I like SSD drives, as they provide more safety than mechanical drives: the data gets written from RAM so fast that there is less time for corruption.

As I mentioned: new RETAIL drives. Years ago I tried refurbs, recerts, etc., and most failed once placed in a RAID. Oddly, many times I would take out a failed drive, run tests, correct any errors, then run it off an HBA, and it would run for weeks or months with no issues; put it back on a RAID controller and it would fail within a short time.

goombawaho, I agree with the info in the links, especially about RAID 6 vs RAID 5. Up until now RAID 6's write delay was a big negative, but with faster controllers and faster motherboard bus speeds it has come into its own.


........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 
Having a drive fail in a RAID 1 is not a failure. The system keeps running; you simply shut down the system (if the drive isn't hot-swappable), put in a new drive, and bring the system back up. The remaining good drive is synced to the new drive and you're up and operational. I had a commercial mainframe 20 years ago with RAID 1 across 30 drives. One day the power was out for 4 days, and when it came back up we had 8 failed drives. Fortunately none of them were in the same RAID pair, so we simply ordered 8 new drives, put them into the drive array, issued the command to resync, and the RAID was fully synced in one hour. While this was going on, the company kept using the single drives with no problem.

Bill
Lead Application Developer
New York State, USA
 
Beilstwh - We all know that what you said is the THEORY behind all versions of RAID, and particularly the RAID 1 I am considering. But Scott24x7 has said that in his experience there is a good chance of data loss due to corruption being duplicated across a set of drives.

"Living tomorrow is everyone's sorrow.
Modern man's daydreams have turned into nightmares.
 
Corruption caused by a drive failure will not propagate across the drives in a RAID 1. Corruption done by the operating system or an application will always propagate in all RAID configurations; that is the entire point of RAID: multiple copies of all changes made to the data on the drive. If you overwrite an Oracle datafile (for example), the exact same change will be made on the rest of the RAID. If it didn't, it wouldn't work very well.

Would I prefer a RAID 5 or 10 for a company setup? Sure, but it is not cost-effective for a home office on a limited budget.

Bill
Lead Application Developer
New York State, USA
 
"But Scott24x7 has said that his experience says there is a good chance of data loss due to corruption being duplicated across a set of drives."

No true.

Corruption across multiple drives is rare, especially across true hardware RAID. I have had a couple of instances on Dell's hybrid software/hardware controllers, not my setups, but on machines set up by others; it was not the fault of the person who set them up, but the low-end Dell PERC controllers are garbage, true garbage. The higher-end PERCs are LSI-based (a Broadcom subsidiary); LSI is a major RAID supplier.
It is rather difficult to have a double fault or RAID puncture if a controller has automatic "scrubbing" working. Remember that in RAID 1, data is only written to the second disk once the write to the first drive is verified; if the write is not verified, it is not written to the other disk. As the data is written to the second disk, that write is verified too. It is basically really difficult to have two faults or two media errors crop up at the same time on multiple disks, or to have media errors develop on multiple disks supporting a file after the file is written. Obviously, if you have a cheap controller with poor error correction and/or no "scrubbing" utility, then you have a problem.

As to Intel's software raid motherboard interface, I have yet to get a double fault or puncture with a raid 1.

As to a double fault/puncture: if the RAID remains functional, you can sometimes recover from the issues mentioned by cloning the RAID to another drive, but the cloning method can NOT be a sector-by-sector copy. Acronis and the other big cloning packages have been cloning this way (rather than "sector by sector") for a while. Basically, during the cloning it is a file copy, with a mechanism to ignore/bypass disk errors and corrupt files and proceed anyway. Once the clone is completed and booted from, the original RAID with the issue is reconfigured/initialized, or better still, new disks are used in a new RAID, and then the cloning is reversed onto the new array. So a double fault or puncture does not carry over to the new array. I am sure it does not work every time, but it has worked for me.
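A minimal Python sketch of that "file copy that bypasses unreadable files" idea, using only the standard library. This is not how Acronis or any particular cloning package works internally; the mount points in the example are placeholders.

[code]
import os
import shutil

def copy_tree_skipping_errors(src_root: str, dst_root: str) -> list[str]:
    """Copy a directory tree, skipping files that cannot be read."""
    skipped = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            try:
                shutil.copy2(src, os.path.join(target_dir, name))
            except OSError:
                # Unreadable (e.g. sitting on a bad sector): note it and move on
                # instead of aborting the whole clone.
                skipped.append(src)
    return skipped

# Example (hypothetical mount points):
# bad = copy_tree_skipping_errors("/mnt/old_raid", "/mnt/new_disk")
# print(f"{len(bad)} files could not be read and were skipped")
[/code]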


........................................

"Computers in the future may weigh no more than 1.5 tons."
Popular Mechanics, 1949
 