We have a statistical application called SAS running on AIX connected to a CX380. Many users log into the system to do analysis on data in our Oracle data warehouse. Data is pulled out of the Oracle tables and inserted into temporary SAS files in a file system called /saswork, where the data is sorted, merged, summed, averaged, etc.
Currently the /saswork file system sits on a 7+1 RAID 5 of 10k drives. If each drive can manage about 120 IOPS, the eight spindles in the RG can manage roughly 960 IOPS raw (call it 840 if you discount one spindle's worth for parity overhead). The problem is that at certain times the users are pushing over 2400 IOPS. This fills up write cache and causes forced flushes, etc. Should I just add another 7+1 and bind new LUNs there? I would then add the new LUNs to the AIX volume group. I'm familiar with striped metaLUNs on the DMX but don't know whether I can stripe across both RAID groups on the CX.
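For what it's worth, here's the back-of-the-envelope spindle math I'm working from. The 120 IOPS/drive figure is just the usual rule of thumb for 10k drives, not a measured number:

```python
# Rough spindle-count check: how many 10k drives (and 7+1 RGs) would be
# needed just to absorb the observed peak, ignoring write penalty for now.
PER_DRIVE_IOPS = 120          # rule-of-thumb figure for 10k FC drives
RG_SPINDLES = 8               # spindles in one 7+1 RAID 5 group

peak = 2400                   # peak front-end IOPS observed in Analyzer
drives_needed = -(-peak // PER_DRIVE_IOPS)     # ceiling division
rgs_needed = -(-drives_needed // RG_SPINDLES)

print(drives_needed, rgs_needed)  # 20 drives, i.e. 3 groups of 7+1
```

So even before worrying about parity, one extra 7+1 may not be enough at peak.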
Another concern is all the parity writes that are occurring. Would I be better off with an 8+8 RAID 1/0? Maybe I would see far fewer disk IOs because no parity is written. If Analyzer shows the RAID group doing 2400 IOPS, are some of those IOPS actually parity writes? Any thoughts or suggestions would be appreciated.
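To put rough numbers on the parity question, here's a sketch of the back-end load under the usual write-penalty assumptions (RAID 5 small write = 4 disk IOs: read data, read parity, write data, write parity; RAID 1/0 write = 2 disk IOs, one per mirror half). The 50/50 read/write mix is my guess, not something I've confirmed in Analyzer:

```python
# Estimate disk (back-end) IOPS for a given front-end workload.
def backend_iops(front_end_iops, write_ratio, write_penalty):
    """Disk IOs the spindles actually see; reads cost 1 IO either way."""
    reads = front_end_iops * (1 - write_ratio)
    writes = front_end_iops * write_ratio
    return reads + writes * write_penalty

# Hypothetical 2400 front-end IOPS at an assumed 50/50 read/write mix:
r5  = backend_iops(2400, 0.5, 4)   # RAID 5: 6000 disk IOPS
r10 = backend_iops(2400, 0.5, 2)   # RAID 1/0: 3600 disk IOPS

print(r5, r10)
```

If that mix is anywhere near right, RAID 1/0 would cut the spindle load substantially for the same front-end work, which is why I'm second-guessing the RAID 5 layout.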