RAID 5


DS4800SAN

Technical User
Sep 22, 2008
I would like to know the maximum number of hard disks that can be added to a RAID 5 array.

Are there any limitations to RAID 5?


Thanks & Regards
 
I think it depends on the controller card you are using; check the documentation of your server/RAID controller hardware.
It's not really a good idea to have many drives in a RAID-5 array, since it cannot survive more than one drive failure without losing the whole array.
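As a rough illustration of that limitation, here is a minimal Python sketch (not tied to any particular controller; the 300 GB disk size is made up): RAID 5 keeps one disk's worth of distributed parity, so usable capacity is (n - 1) x disk size, and only a single concurrent disk failure is survivable no matter how many disks you add.

def raid5_usable_capacity(num_disks: int, disk_size_gb: float) -> float:
    # RAID 5 spends one disk's worth of space on distributed parity,
    # so usable capacity grows as (num_disks - 1) * disk size.
    if num_disks < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (num_disks - 1) * disk_size_gb

for n in (3, 6, 12):
    print(f"{n} x 300 GB disks -> {raid5_usable_capacity(n, 300):.0f} GB usable, "
          f"still only 1 disk failure tolerated")

Any hard cap on the number of member disks comes from the controller or firmware, not from RAID 5 itself, so the vendor documentation is the place to check.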
 
I wouldn't listen to the above advice. If you have a hot spare (one or more) in your array, RAID 5 can tolerate more than one drive failure without a problem, as long as the rebuild onto the spare completes before the next drive fails.

And besides, anyone who doesn't have at least one drive ON HAND in case of a drive failure, NOT counting any hot spares, is treading on thin ice.

As soon as you get a failure and rebuild onto a hot spare, get that dead drive removed and replaced. This is your peace of mind. And then re-order that spare to put on the shelf.

If you live by these rules, you'll be pretty safe.
 
goombawaho, I agree with what you are saying, but remember: this is assuming that someone has hot spare drives available. I have seen way too many setups where someone has lost 2 drives in a RAID-5 array simultaneously.
 
That's not something I would worry about overly, having managed a lot of RAID 5 servers in my day. I never saw two drives go even remotely close to one another. Besides, with a hot spare on standby, you can lose two drives and not have a problem, provided the rebuild finishes before the second one goes. But choose your own worries is what I say.

Go ahead and suggest a different RAID level/approach to him and run it up the flagpole. He asked about RAID 5 and I told him how to prepare for bad disks.
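For anyone weighing those two views, here is a hedged back-of-envelope sketch in Python of the chance that a second drive fails while the array is rebuilding onto a spare. The MTBF and rebuild-time figures are purely illustrative assumptions, not numbers from this thread, and the model assumes independent exponential failures, which real batches of same-age drives often violate.

import math

def second_failure_probability(num_disks, mtbf_hours=500_000.0, rebuild_hours=24.0):
    # Probability that at least one of the remaining (num_disks - 1) disks
    # fails during the rebuild window, assuming independent exponential failures.
    remaining = num_disks - 1
    per_disk = 1.0 - math.exp(-rebuild_hours / mtbf_hours)
    return 1.0 - (1.0 - per_disk) ** remaining

for n in (4, 8, 16):
    p = second_failure_probability(n)
    print(f"{n}-disk RAID 5: ~{p:.4%} chance of a second failure during a 24 h rebuild")

The only point of the sketch is that the exposure grows with the number of member disks and the length of the rebuild, which is why bigger arrays lean harder on hot spares, shelf spares and good monitoring.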
 
Let's not forget some of these drives are getting a little old now. I replaced a 6607 disk a while ago, and while it was rebuilding another one went down. To make matters worse, soon after that there was a power cut and we had to shut the system down; 2 more disks failed to spin up on IPL, so that's a total of 4 bad disks in one day. Got them swapped in on overtime, though.
 
Regarding having very old drives/servers in service: that has to be addressed in a different way, because it's not a technical problem. You have to get your management to agree to replace servers after a certain age, due to the increased probability of failure and the decreased QUICK availability of parts/drives and support.

If you don't, now you're really talking about increasing the chance of server failure. To me, replacing a server every three to four years max is the smart thing to do to avoid these issues.

Management that says "run them until they die" is asking for just that - death. I know it's hard to get them to understand, but you have to liken it to old age in humans. Tell them a four-year-old server is about like a 60- to 70-year-old man or a 10-year-old dog. Not ready to die, but not a spring chicken either.
 