RAID 10 is a stripe of mirrors: data is striped across mirrored pairs. If you had only two drives, it would simply be a mirror - RAID 1 - so the minimum number of drives is four.
With that said, let's take a look at the performance aspects of the equation. A single 10K drive formatted NTFS is capable of roughly 120 IO/sec at a 4K IO size. It's important to know the request size of the application you intend to run, because it forms the basis for accurately predicting the load an array will handle. For Exchange or Lotus Notes, the request size is 4K. For SQL Server it's 8K. If you don't know the request size of your application, you can find it using the perfmon counter PhysicalDisk\Avg. Disk Bytes/Transfer.
Next, we need to know something about the performance characteristics of the different RAID types. For RAID 0 it's p*n, where p is the IO/sec of a single drive and n is the number of drives in the array. For RAID 1 or 10, we have to differentiate between read and write performance. For reads, the formula is the same as it is for RAID 0. For writes, it's p*n/2, because every write has to hit both halves of a mirror. RAID 5 gets even trickier. For RAID 5, read performance is p*(n-1). For write performance, it's p*n/4. When a write to RAID 5 occurs, we read the old data, read the old parity, calculate the new parity, then write the data and write the parity.. whew! That's four disk IOs for every logical write.
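To make the arithmetic concrete, here's a minimal Python sketch of those formulas. The function name and structure are my own, just for illustration; p and n are as defined above.

    def array_iops(raid, p, n, write=False):
        """Rough theoretical IO/sec for an array, ignoring cache.

        raid  -- one of "0", "1/10", "5"
        p     -- IO/sec a single drive can sustain (e.g. ~120 for a 10K drive)
        n     -- number of drives in the array
        write -- True for the write figure, False for the read figure
        """
        if raid == "0":
            return p * n                           # striping scales reads and writes
        if raid == "1/10":
            return p * n / 2 if write else p * n   # every write hits two drives
        if raid == "5":
            # writes: read data, read parity, write data, write parity = 4 IOs
            return p * n / 4 if write else p * (n - 1)
        raise ValueError("unknown RAID level: %s" % raid)

    # Example: four 10K drives (~120 IO/sec each) as RAID 10
    print(array_iops("1/10", 120, 4))              # 480 reads/sec
    print(array_iops("1/10", 120, 4, write=True))  # 240 writes/sec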
In all of the above calculations, we didn't take cache into account. The effectiveness of cache depends on the IO pattern, the caching algorithm, and the amount of cache. For sequential IO, write caching is very good, and can effectively negate the RAID 1/10 write penalty. For RAID 5, at a minimum, write caching can negate the need to read the data back when calculating parity, effectively changing the write formula to p*n/3. For sequential writes to RAID 5, increasing the write cache can substantially increase performance because the parity can also be cached. The problem occurs with random data sets. For large random data sets, caching loses its effectiveness. We still want some minimum amount of write caching so that on RAID 1/10 we get around the write penalty, and on RAID 5 we at least calculate the parity in cache.
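As a quick sketch of that cache adjustment (again, the helper name is hypothetical, not from any tool):

    def raid5_write_iops(p, n, cached=False):
        # cached: write cache holds the data, so each logical write costs
        # 3 physical IOs (read parity, write data, write parity) instead of 4
        return p * n / 3 if cached else p * n / 4

    print(raid5_write_iops(120, 5))               # 150.0 writes/sec uncached
    print(raid5_write_iops(120, 5, cached=True))  # 200.0 writes/sec with cache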
So how do you set the cache ratio? For RAID 1/10 it's simple. You can use the Disk Writes/sec, Disk Reads/sec, and Disk Transfers/sec counters to figure out your read/write ratio, and the cache ratio on RAID 1/10 is simply that read/write ratio. For RAID 5, it's a little more complicated: you take the read/write ratio and transform it into physical IOs. Say you have a 1:1 ratio. Five logical reads are just 5 physical reads, but 5 logical writes become 10 reads and 10 writes (two reads and two writes each). That brings us to 15 reads against 10 writes, which is a 60/40 split. For a transactional database application, you usually see a 3:1 ratio of reads to writes. Run the same transformation (15 reads plus 5 writes becomes 25 reads and 10 writes) and the split comes out closer to 70/30 in this case.
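Here's that transformation as a small Python helper (hypothetical name, my own sketch) that reproduces both worked examples:

    def raid5_cache_split(reads, writes):
        # each logical write costs 2 physical reads + 2 physical writes
        phys_reads = reads + 2 * writes
        phys_writes = 2 * writes
        total = phys_reads + phys_writes
        return round(100 * phys_reads / total), round(100 * phys_writes / total)

    print(raid5_cache_split(5, 5))   # (60, 40) -- the 1:1 case
    print(raid5_cache_split(15, 5))  # (71, 29) -- the 3:1 case, roughly 70/30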
Well, I'm out of time. Next time we can talk about queueing, IO completion times, sealing wax, and strings.
Hope this helps
John
MOSMWNMTK