RAID 0 failure rate
Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible, though the reliability of the set decreases as disks are added.
Reliability of a given RAID 0 set is approximately equal to the average reliability of each disk divided by the number of disks in the set:

\text{MTTF}_{\text{set}} \approx \frac{\text{MTTF}_{\text{disk}}}{\text{number of disks in the set}}
That is, reliability (as measured by mean time to failure (MTTF) or mean time between failures (MTBF)) is roughly inversely proportional to the number of members, so a set of two disks is roughly half as reliable as a single disk. Equivalently, the probability of failure is roughly proportional to the number of members. If a single disk has a 5% probability of failing within three years, then in a two-disk array that probability rises to \Pr(\text{at least one fails}) = 1 - \Pr(\text{neither fails}) = 1 - (1 - 0.05)^2 = 0.0975 = 9.75\,\%.
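The arithmetic above can be checked with a minimal Python sketch, assuming independent and identically reliable disks; the function names and the 1,000,000-hour per-disk MTTF figure are illustrative, not drawn from any particular RAID implementation.

def raid0_failure_probability(per_disk_failure_prob: float, num_disks: int) -> float:
    # Probability that at least one member disk fails, which loses the whole set:
    # 1 - Pr(no disk fails), assuming independent failures.
    return 1 - (1 - per_disk_failure_prob) ** num_disks

def raid0_mttf(per_disk_mttf_hours: float, num_disks: int) -> float:
    # Approximate MTTF of the set: single-disk MTTF divided by member count.
    return per_disk_mttf_hours / num_disks

if __name__ == "__main__":
    # Two-disk example from the text: 5% per-disk failure probability over three years.
    print(raid0_failure_probability(0.05, 2))   # 0.0975 (i.e. 9.75%)
    # MTTF scaling for a hypothetical disk rated at 1,000,000 hours.
    print(raid0_mttf(1_000_000, 2))             # 500000.0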
The reason for this is that the file system is distributed across all disks. When a drive fails, the file system cannot cope with such a large loss of data and coherency, since the data is "striped" across all drives and cannot be recovered without the missing disk. Data can be recovered using specialized tools (see data recovery); however, such data will be incomplete and most likely corrupt, and recovery of drive data is costly and not guaranteed.