In all that follows I'm assuming a uniform disk size across arrays; I've also used imaginary disk sizes to make the maths easier - who ever heard of a 50GB disk?
RAID1
For 4n GB of storage you should buy at least 9n GB of disk space, but you can get away with 8n. (RAID1 requires physical capacity double the logical capacity of the array, plus one disk as a hot spare on standby.)
To illustrate the formula:
200 GB storage (n=50GB) => 9 x 50GB disks with a hot spare, or 8 x 50GB disks without
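If it helps, here's a quick sketch of the RAID1 sizing rule in Python (the function name and signature are just my own invention, not from any tool):

```python
import math

def raid1_disks(logical_gb, disk_gb, hot_spare=True):
    """Disks needed for a RAID1 array of the given logical capacity.

    RAID1 mirrors every data disk, so the physical capacity is double
    the logical capacity; the optional hot spare adds one more disk.
    """
    data_disks = math.ceil(logical_gb / disk_gb)   # the "n" disks of data
    return 2 * data_disks + (1 if hot_spare else 0)

print(raid1_disks(200, 50))         # 9 disks with a hot spare
print(raid1_disks(200, 50, False))  # 8 disks without
```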
RAID5
For 4n GB of storage you should buy at least 6n GB of disk space, but you can get away with 5n. (RAID5 requires physical capacity one disk larger than the logical capacity of the array, plus one disk as a hot spare on standby.)
To illustrate the formula:
200 GB storage (n=50GB) => 6 x 50GB disks with a hot spare, or 5 x 50GB disks without
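And the same sort of sketch for the RAID5 rule (again, just an illustrative helper of my own, not from any tool):

```python
import math

def raid5_disks(logical_gb, disk_gb, hot_spare=True):
    """Disks needed for a RAID5 array of the given logical capacity.

    RAID5 needs one extra disk's worth of capacity for parity on top of
    the data disks; the optional hot spare adds one more disk.
    """
    data_disks = math.ceil(logical_gb / disk_gb)   # the "n" disks of data
    return data_disks + 1 + (1 if hot_spare else 0)

print(raid5_disks(200, 50))         # 6 disks with a hot spare
print(raid5_disks(200, 50, False))  # 5 disks without
```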
I've recently upgraded an IBM SSA drawer that was split between 2 arrays: 1 x RAID1 (containing my app DB) + 1 x RAID5 (containing my app storage). The upgraded arrangement spreads a single RAID5 array over 15 disks, with the 16th providing hot-spare cover.
Result? I've *halved* the time it takes to back up my app DB every day, with no noticeable increase in write times.
HTH.
Kind Regards,
Matthew Bourne
"Find a job you love and never do a day's work in your life.