lwcomputing,
I've been reading your article, but I believe that there is a mistake relating to RAID 10. You describe it as a pair of RAID 0 arrays that are mirrored, when in actuality it is a RAID 0 stripe laid across a series of mirrors. For example, say that you have 8 hard disks. Stripes are designated S; mirrors are designated M. In your example, RAID 10 would look like:
M1 M2
S1 S1
S2 S2
S3 S3
S4 S4
But RAID 10 actually looks like:
S1 S2 S3 S4
M1 M1 M1 M1
M2 M2 M2 M2
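If it helps to see the difference in code, here is a rough sketch (Python, with a hypothetical disk numbering of 0 through 7) of which two physical disks would hold a given logical block under each layout:

    def raid01_disks(block, n_disks=8):
        # Mirror of stripes (your model): disks 0-3 form one RAID 0
        # set and disks 4-7 are its mirror image, so a block lands on
        # the same stripe position in each half.
        half = n_disks // 2
        stripe = block % half
        return (stripe, stripe + half)

    def raid10_disks(block, n_disks=8):
        # Stripe of mirrors (actual RAID 10): disks are grouped into
        # mirrored pairs (0,1), (2,3), (4,5), (6,7), and blocks are
        # striped across the pairs.
        pairs = n_disks // 2
        pair = block % pairs
        return (2 * pair, 2 * pair + 1)

    for b in range(4):
        print(b, raid01_disks(b), raid10_disks(b))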
There are a couple of reasons for this. Firstly, someone who would choose RAID 10 over RAID 5 would do so primarily for increased read and write performance. In the event of a drive failure under your model, you lose functionality in one whole stripe set, which means that half of your disks are now totally useless and you are reading and writing at 50% of your previous rate. Under the actual RAID 10 model, a single disk failure costs you only the functionality of that one disk, and in our example the read/write throughput is diminished by only around 12.5% (one eighth). The larger your RAID 10 array, the smaller the performance hit from a failed drive.
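A quick back-of-the-envelope version of that, using the same assumption as above (each healthy disk contributes one unit of bandwidth, and a dead stripe set takes its whole half offline):

    n = 8  # disks in the array

    # Your model: the failed disk takes its entire 4-disk stripe set
    # offline, so only the other half still serves I/O.
    after_failure_yours = (n // 2) / n       # 0.5 -> running at 50%

    # Actual RAID 10: only the failed disk itself drops out.
    after_failure_raid10 = (n - 1) / n       # 0.875 -> a 12.5% loss

    print(f"your model: {after_failure_yours:.0%} of original throughput")
    print(f"RAID 10:    {after_failure_raid10:.1%} of original throughput")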
The second reason to stripe across mirrors rather than mirror a pair of stripe sets is the potential for catastrophic failure. With either method the system could theoretically lose up to half of its disks and still not suffer catastrophic data loss, but the chance of catastrophic loss is far greater in your model than in mine. Again, using the example above:
Let's say that with your model you lose disk M1S1. You still have a functional array with the remaining disks, and as long as any subsequent failures occur on the M1 side you are safe. However, a single failure on the M2 side will kill your array. You have a 4 in 7 (roughly 57%) chance that a subsequent failure will land on the M2 side and therefore result in total data loss.
With the actual RAID 10 model, let's say that you lose disk S1M1. Again, you still have a functional array with the remaining disks, but in this case you are safe as long as no subsequent failure occurs on S1M2. Only a failure on S1M2 will result in data loss, so you have a 1 in 7 (roughly 14%) chance that a subsequent failure will result in total data loss.
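You can check both figures directly; this little sketch just restates the counting argument above for the 8-disk example:

    n = 8  # disks in the array; one has already failed

    # Your model: the surviving stripe set still has 4 disks, and
    # losing any one of them is fatal; losing any of the 3 remaining
    # disks of the dead half is harmless.
    fatal_yours = (n // 2) / (n - 1)    # 4/7 ~= 57%

    # Actual RAID 10: only the failed disk's mirror partner is fatal.
    fatal_raid10 = 1 / (n - 1)          # 1/7 ~= 14%

    print(f"your model: {fatal_yours:.0%} chance the next failure kills the array")
    print(f"RAID 10:    {fatal_raid10:.0%} chance the next failure kills the array")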
So not only is the actual RAID 10 model more fault tolerant than what you have described as RAID 10, it also performs better in a degraded state. And the best thing about RAID 10 is that it scales very well with more disks. A standard 4U shelf in a drive array has room for 15 disks; in a RAID 10 that gives you 7 pairs of disks plus a hot spare. Even without the hot spare, you would only have a 1 in 13 (roughly 7.7%) chance that a subsequent failure would result in data loss, and that percentage keeps shrinking as you add disks. While the percentage also drops under your model as you add more disks, it will always remain above 50%.
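Here is the same calculation extended to larger arrays (2n disks arranged as n mirrored pairs, hot spares ignored), which shows your model's odds creeping down toward 50% but never below it:

    # Odds that a second failure destroys the array, given one failure.
    for pairs in (4, 7, 12, 24):
        disks = 2 * pairs
        raid10 = 1 / (disks - 1)      # only the dead disk's partner is fatal
        yours = pairs / (disks - 1)   # the whole surviving half is fatal
        print(f"{disks:2d} disks: RAID 10 {raid10:5.1%}   your model {yours:5.1%}")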