Hello,
We are running a statistical application called SAS on AIX 5.3 connected to a cx380 with 8GB cache on each SP. There is a temporary/work file system called /saswork, and users kick off huge queries whose sorting and other scratch work lands in /saswork. Currently this file system sits on a 3+1 RAID 5. At times the file system does over 1200 IOPS; the drives cannot keep up, cache gets flooded, and I end up with far too many forced flushes. This seems to be impacting other applications that share the array.
If each of my 146GB 10K drives can do an average of 120 IOPS, then I should need at least 10 drives to handle 1200 IOPS without flooding cache.
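One thing the 1200 / 120 = 10 estimate leaves out is the RAID 5 write penalty: each random write costs roughly four back-end IOs (read data, read parity, write data, write parity), so the spindle count depends heavily on the read/write mix. A quick back-of-envelope sketch (the 120 IOPS per spindle is from above; the read/write mixes are assumed, not from any measurement):

```python
import math

DRIVE_IOPS = 120          # average random IOPS per 10K spindle (from the estimate above)
RAID5_WRITE_PENALTY = 4   # each random write ~= 2 reads + 2 writes on the back end

def spindles_needed(front_end_iops, write_fraction):
    """Return the minimum data spindles for a given front-end load and write mix."""
    reads = front_end_iops * (1 - write_fraction)
    writes = front_end_iops * write_fraction
    back_end_iops = reads + writes * RAID5_WRITE_PENALTY
    return math.ceil(back_end_iops / DRIVE_IOPS)

print(spindles_needed(1200, 0.0))   # pure reads: 10 drives, matching the estimate above
print(spindles_needed(1200, 0.5))   # a 50/50 mix: 25 drives once the penalty is counted
```

A sort/work area like /saswork is usually write-heavy, so the real requirement is likely closer to the second number than the first.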
The file system is made up of 4 LUNs, but LUN16 gets most of the IOs. It is the first 100 GB LUN in the file system, and it seems most jobs can get all their work done on this first LUN without needing the others. It is by far the busiest LUN on the array, and it tends to bog down SPA. When I redo this I will use AIX Logical Volume striping so that IOs are spread equally across all four LUNs - 2 on SPA and 2 on SPB.
I have 12 available 146GB drives (I also have thirty 300GB drives, but I don't know if I want to use those big guys as I don't need much space). Should I do a 10+1 RAID 5? That seems like a lot of drives. Should I do two separate 5+1 RAID groups? If I get a few more drives, maybe I could do two 6+1 or two 7+1.
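For comparing the candidate layouts, the drive counts, raw usable space, and aggregate spindle IOPS work out as below (a rough sketch only: 146 GB per drive and ~120 IOPS per spindle as above, ignoring vendor formatting overhead and the write penalty):

```python
# Each layout is a list of (data_drives, parity_drives) RAID 5 groups.
layouts = {
    "one 10+1": [(10, 1)],
    "two 5+1":  [(5, 1), (5, 1)],
    "two 6+1":  [(6, 1), (6, 1)],
    "two 7+1":  [(7, 1), (7, 1)],
}

DRIVE_GB = 146
DRIVE_IOPS = 120

for name, groups in layouts.items():
    data = sum(d for d, p in groups)
    total = sum(d + p for d, p in groups)
    print(f"{name}: {total} drives, ~{data * DRIVE_GB} GB usable, "
          f"~{total * DRIVE_IOPS} back-end IOPS")
```

The two-group options cost one extra parity drive versus a single big group, but each group rebuilds faster after a failure and the LUNs can be split across both SPs.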
I have never used multiple RGs for one file system. Are there performance impacts? I could put LUN1 and LUN3 in the first RG and LUN2 and LUN4 in the second. As heavy writes occur, they would be going to all four LUNs across both RGs. Does this make sense, or should I stick to a single RG?
Thanks
Brad