Hi,
I have a P660 with 4 x 750 MHz RS64s, 16GB RAM, hosting a single Oracle 10g instance. I'm performing a capacity exercise for an anticipated future workload increase.
I believe the server is I/O bound, but I don't know what else to do to confirm this assertion. Would anyone be interested in checking my conclusion, or perhaps suggesting other avenues to pursue to back it up or disprove it?
The server isn't paging (at all), and the run/wait queues are normally very low (typically 1-1.5 and 0 respectively).
Processor threads are sometimes predominantly bound to one CPU, but are normally shared almost equally amongst all four CPUs. Either way, each CPU shows what I think is an excessive amount of time in the "Wait" state, even when load is low.
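To double-check how that wait time is spread across the processors, I was also going to pull a per-CPU breakdown with sar (the standard AIX per-processor report; the %wio column should show whether the wait concentrates on one CPU):

  # per-processor utilisation, 5-second intervals, 6 samples
  sar -P ALL 5 6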
Here's some vmstat output, with the top row of data removed (po and pi are 0, and there are plenty of free pages), from an occasion when all four CPUs were equally loaded:
System Configuration: lcpu=4 mem=16384MB
runq  waitq  user  system  idle  wait
   1      1    10      25     4    60
   1      0     8       7     6    79
   1      0     8       7    10    75
   1      0     8       6     4    82
   1      0     7       9     6    77
The CPUs seem to be spending an excessive number of ticks in the Wait state, which (I believe) in AIX 5.2 means waiting on disk/NFS I/O.
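To check that assumption, vmstat's -I option should help separate file I/O from everything else (it adds fi/fo columns for file pages in/out per second, plus a p column for threads waiting on raw device I/O):

  # I/O-oriented view; high wait alongside high fi/fo would point at filesystem I/O
  vmstat -I 5 10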
However, the disks show little I/O, while the fibre adapter does:
Name    %busy   read     write   xfers   Disks
                (KB/s)   (KB/s)
fcs0    448.0   2048.0   256.0   256.0      62
fcs1      0.0      0.0     0.0     0.0      44
(internal SCSI adapter skipped)
TOTALS (3 adapters): 2048.0 KB/s read, 256.0 KB/s write, 256.0 xfers, 109 disks, TOTAL (MB/s) = 2.2
I'm not sure how the busy percentage of 448 is arrived at.
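To sanity-check the adapter numbers, I was also going to pull the HBA's own counters and queue settings; fcstat may or may not be present at this maintenance level, but lsattr certainly is:

  fcstat fcs0        # raw HBA statistics, incl. counts of requests queued/deferred
  lsattr -El fcs0    # num_cmd_elems and max_xfer_size settings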
This server has the Oracle binaries and the tablespace-hosting filesystems presented through Hitachi LUNs, in one volume group, via one fibre HBA (fcs0). iostat summarises as follows:
System configuration: lcpu=4 disk=109
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.7 131.3 23.3 3.6 48.7 24.4
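To tie the adapter traffic back to individual logical volumes and files, filemon should give the breakdown (it's trace-based, so I'd keep the capture short):

  # capture ~60 seconds of LV and PV activity; report goes to /tmp/fmon.out
  filemon -o /tmp/fmon.out -O lv,pv
  sleep 60
  trcstop
  # then read the "Most Active Logical Volumes" section of /tmp/fmon.out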
I can identify the busiest disks, so I guess the next step is to see whether it's possible to split the reads/writes so that at least the redo log files are written via another path; a rough sketch of how I'd approach that follows.
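Roughly what I have in mind (redologlv and oravg are made-up names standing in for my real LV/VG names):

  lslv -l redologlv     # which hdisks currently hold the redo-log LV
  lsvg -p oravg         # all physical volumes in the volume group
  # if disks are reachable over the second HBA (fcs1), migratepv could move the LV:
  # migratepv -l redologlv hdiskX hdiskY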
Any thoughts or suggestions would be gratefully received.