Amznilla...
Your original setup was the best for speed. If you lower the number of drive pairs in the RAID 10, you sacrifice speed and you reduce the RAID 10's capacity, so I would keep your original setup. If you had ten or more drive slots, then I might go with the RAID 1 plus the RAID 10.
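To put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python (the per-drive capacity and throughput figures are made-up assumptions; substitute your own drives' specs):

```python
# Rough RAID 10 sizing/throughput estimate (toy numbers, not a benchmark).
# Assumptions: identical drives, reads can be serviced by either half of a
# mirror pair, writes must go to both halves of each pair.

DRIVE_SIZE_GB = 600   # hypothetical per-drive capacity
DRIVE_MBPS = 150      # hypothetical per-drive sequential throughput

def raid10_estimate(pairs: int):
    usable_gb = pairs * DRIVE_SIZE_GB    # one drive of each pair is the mirror copy
    read_mbps = pairs * 2 * DRIVE_MBPS   # best case: all spindles serve reads
    write_mbps = pairs * DRIVE_MBPS      # writes limited to one drive's speed per pair
    return usable_gb, read_mbps, write_mbps

for pairs in (2, 3, 4):
    gb, r, w = raid10_estimate(pairs)
    print(f"{pairs * 2} drives: ~{gb} GB usable, ~{r} MB/s read, ~{w} MB/s write")
```

Dropping from three pairs to two costs you a third of both capacity and best-case throughput, which is the point above.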
Your second setup does provide a bit more safety, as the OS would be isolated from the data array, and having two completely separate arrays gives you multiple spindle sets, which is a plus when the SQL tmp/log files run on different spindles from the SQL database. But in my opinion, if you run the SQL tmp/log files on the RAID 1, the performance of the RAID 1 (being so much slower than the RAID 10, especially a RAID 10 with more than the minimum four drives) does not make it worthwhile. Basically, you're not gaining any performance from the extra spindle set, you're lowering it.

A RAID 1 only READS from the fastest drive in the pair (the controller chooses it) and only WRITES to that drive first; once the data is committed, it then WRITES to the second drive, so RAID 1 has inherent delays. The RAID adapter's cache makes up for much of those delays, but not once the server is even slightly stressed; at that point the cache is flooded and no longer compensates. If the server's disk lights go solid, the cache is likely flooded. If only a few entries are input in a short time, or a small report is run, then the adapter's cache covers the RAID 1 delays; if the disk lights are flickering on and off, the cache is likely not flooded.
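If it helps to picture the commit-then-mirror behaviour described above, here is a toy Python model (purely illustrative; the latency figure is a made-up assumption, and real controllers cache and overlap writes far more cleverly):

```python
# Toy model of the write paths described above (illustrative only; treat the
# numbers as fiction, not as a prediction of real controller behaviour).

MIRROR_WRITE_MS = 5.0  # assumed latency for one drive to commit one block

def raid1_write(blocks: int) -> float:
    # As described above: commit to the first drive, then write the mirror.
    return blocks * (MIRROR_WRITE_MS + MIRROR_WRITE_MS)

def raid10_write(blocks: int, pairs: int) -> float:
    # Blocks are striped across the mirror pairs, so the pairs work in parallel.
    blocks_per_pair = -(-blocks // pairs)  # ceiling division
    return raid1_write(blocks_per_pair)

print("RAID 1, 100 blocks:          %.0f ms" % raid1_write(100))
print("RAID 10 (4 drives), 100:     %.0f ms" % raid10_write(100, pairs=2))
print("RAID 10 (8 drives), 100:     %.0f ms" % raid10_write(100, pairs=4))
```

Each mirror pair inherits the RAID 1 delay, but striping spreads the work, which is why a wider RAID 10 pulls away from a lone RAID 1.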
Now, if you had more drive bays and unlimited resources, then you could have a four-drive RAID 10 for the OS/programs/tmp/log files and a six- or eight-drive RAID 10 for the SQL database; then the multiple spindle sets would make sense. Or you could have an SSD RAID 1 plus a 6-8 drive RAID 10, or a CacheCade v2 setup... I'm getting carried away.
Most of my clients can't afford RAID 10, so they go with slower RAID 5, and most of them run SQL; even then the speed is fast, so your original setup will be very fast.
Choosing the best stripe size is difficult. Finding the fastest stripe size would require benchmarking with your own data, as it depends on your data chunk size. Choosing a stripe size is a trade-off: if you pick a size that increases your database speed, it will lower the speed of the OS and other programs/data. If your server will only be used for SQL, then it is worth benchmarking to find the optimal stripe size. If the server is going to be used for SQL and other data, such as Word/Excel/other program data, then the best choice is to go with the DEFAULT size. Even if the server is strictly for SQL and you stick with the default size, you will still have a fast server.
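If you do benchmark, even a crude block-size sweep tells you something. A minimal Python sketch (the test-file path is a hypothetical placeholder; purpose-built tools like Iometer or diskspd will do a better job, and OS caching will skew these numbers unless the file is much larger than RAM):

```python
import os
import time

TEST_FILE = r"D:\bench\testfile.bin"  # hypothetical path; use a file much larger than RAM
CHUNK_SIZES = [64 * 1024, 128 * 1024, 256 * 1024, 512 * 1024, 1024 * 1024]
READ_TOTAL = 256 * 1024 * 1024        # read 256 MB per pass

for chunk in CHUNK_SIZES:
    with open(TEST_FILE, "rb", buffering=0) as f:  # unbuffered raw reads
        start = time.perf_counter()
        remaining = READ_TOTAL
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                f.seek(0)  # wrap around if the test file is shorter than READ_TOTAL
                continue
            remaining -= len(data)
        elapsed = time.perf_counter() - start
    print(f"{chunk // 1024:5d} KB chunks: {READ_TOTAL / elapsed / 1024 / 1024:7.1f} MB/s")
```

Run the sweep once per candidate stripe size and compare the curves; the chunk size that your database actually issues is the one that matters.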
If you do want to play with the stripe size: unless things have changed since the last time I tweaked stripe sizes and benchmarked the results (it's been a few years), the stripe size can be changed without destroying the array in RAID 10 (not possible with RAID 5), as long as you do not allow the array to initialize. Obviously, do not work with live data; work with a copy of your data. By all means use benchmark programs, but also check your speed by going into your SQL program and running whatever you do on a normal basis to see the responsiveness. Benchmark results mean little if the areas of SQL you actually use slow down.
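One way to sanity-check that responsiveness is to time the queries you actually run day to day, before and after each stripe-size change. A minimal sketch using pyodbc (assumes the pyodbc package and a SQL Server ODBC driver are installed; the server name, database, and query are placeholders to swap for your own):

```python
import time
import pyodbc

# Placeholders: point these at your own server and a query you run every day.
CONN_STR = "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDb;Trusted_Connection=yes"
QUERY = "SELECT TOP 1000 * FROM dbo.Orders ORDER BY OrderDate DESC"

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()

# Run the query a few times and record wall-clock time; keep the printed
# numbers from each configuration so you can compare like for like.
for run in range(1, 4):
    start = time.perf_counter()
    rows = cursor.execute(QUERY).fetchall()
    elapsed = time.perf_counter() - start
    print(f"run {run}: {len(rows)} rows in {elapsed:.2f} s")

conn.close()
```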
Now, to really add complexity to the performance question, google "IBM CacheCade v2 performance".