
CPU wait states P660/AIX/Oracle 10g

reclspeak
Hi,

I have a P660 with 4 x 750 MHz RS64 CPUs and 16 GB RAM, hosting a single Oracle 10g instance. I'm performing a capacity exercise for an anticipated future workload increase.

I believe the server is I/O bound, but I don't know what else to do to confirm this assertion. Would anyone be interested in checking my conclusion, or perhaps suggesting other avenues to pursue to back it up or disprove it?

The server isn't paging (at all) and the run/wait queues are normally very low (typically 1-1.5 and 0 respectively).

Processor threads are sometimes predominantly bound to one CPU but are normally shared almost equally amongst all four CPUs. Either way, each CPU shows what I think is an excessive amount of time in the "wait" state, even when load is low.
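To see the per-CPU breakdown (and whether the wait time really is spread across all four processors), something like the following sar invocation should work on AIX 5.2; the interval and count are just illustrative:

# per-processor utilisation, 5-second samples, 6 reports
sar -P ALL 5 6

# system-wide summary (including %wio) for comparison
sar -u 5 6

The per-CPU %wio figures should line up with the wait numbers vmstat reports below.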

Here's some vmstat output, with the top row of data removed (po and pi are "0" and there are plenty of free pages), taken on an occasion when all four CPUs were equally loaded:

System configuration: lcpu=4 mem=16384MB

run queue  wait queue  user  system  idle  wait
    1          1        10     25      4    60
    1          0         8      7      6    79
    1          0         8      7     10    75
    1          0         8      6      4    82
    1          0         7      9      6    77
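If it helps, vmstat's -I option adds a "p" column (threads waiting on raw-device I/O) and fi/fo columns (file pages in/out per second), which can make filesystem I/O pressure easier to see than the plain wait percentage. A sample invocation, purely illustrative:

# I/O-oriented vmstat view, 5-second intervals, 5 reports
vmstat -I 5 5

Non-zero "p" values alongside a high wait percentage would support the I/O-bound theory.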

The CPUs seem to be spending an excessive amount of time in wait ticks, which (I believe) in AIX 5.2 means waiting on disk/NFS I/O.

However, the disks individually show little I/O, while the fibre adapter does:

Name     %busy    read      write         xfers   Disks   Adapter-Type
fcs0     448.0   2048.0   256.0 KB/s      256.0    62
fcs1       0.0      0.0     0.0 KB/s               44
(internal SCSI adapters skipped)
TOTALS (3 adapters)  2048.0  256.0 KB/s   256.0   109    TOTAL (MB/s) = 2.2

I'm not sure how the "448" %busy figure is arrived at.
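If all of the traffic really is going down a single HBA, the adapter's queue and transfer attributes might be worth a look too. Something along these lines should show them (the specific attribute names to watch for, num_cmd_elems and max_xfer_size on the adapter and queue_depth on the LUNs, are from memory, so treat this as a starting point):

# FC adapter attributes
lsattr -El fcs0

# attributes of one of the Hitachi LUNs
lsattr -El hdisk77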

This server has the Oracle binaries and the tablespace-hosting filesystems presented through Hitachi LUNs, all in one volume group and all through one fibre HBA (fcs0). iostat summarises it as follows:

System configuration: lcpu=4 disk=109

tty:  tin    tout    avg-cpu:  % user   % sys   % idle   % iowait
      0.7   131.3               23.3     3.6     48.7     24.4

I can identify the busiest disks, so I guess the next step is to see whether it's possible to split the reads/writes so that at least the redo log files are written via another path.
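For pinning down exactly which logical and physical volumes the I/O is hitting (and so whether the redo logs really are the hot spot), a short filemon trace is one option; the output file name and the 60-second window below are arbitrary:

# trace LV and PV activity for about a minute, then stop the trace
filemon -o /tmp/fmon.out -O lv,pv
sleep 60
trcstop

# the busiest LVs and PVs appear at the top of the report
more /tmp/fmon.out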

Any thoughts or suggestions would be gratefully received.
 
Can you show the filesystem distribution?

lsvg

lsvg -l vgname

lspv -l hdiskX

iostat 1 10

Regards,
Khalid
 
Just two VGs - root with two mirrored disks, and a data VG:

# lsvg -l <data vg>

LV NAME    TYPE      LPs    PPs   PVs   LV STATE     MOUNT POINT
lv00       jfs2log     1      1    1    open/syncd   N/A
lv01       jfs2      320    320    1    open/syncd   /usr/<app>
u10lv      jfs2      256    256    1    open/syncd   /u10
dumplv     jfs2      126    126    1    open/syncd   /usr/dump
loglv04    jfs2log     1      1    1    open/syncd   N/A
u20lv      jfs2     6080   6080   21    open/syncd   /u20
paging01   paging    192    192    1    open/syncd   N/A

hdisk77:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
u20lv 433 433 87..86..86..87..87 /u20
lv00 1 1 00..01..00..00..00 N/A

hdisk78:
u20lv 114 114 87..00..00..00..27 /u20
lv01 320 320 00..87..86..87..60 /usr/syntel
hdisk79:
u20lv 178 178 87..00..00..04..87 /u20
u10lv 256 256 00..87..86..83..00 /u10
hdisk80:
u20lv 308 308 87..00..47..87..87 /u20
dumplv 126 126 00..87..39..00..00 /usr/dump
hdisk81:
u20lv 433 433 87..86..86..87..87 /u20
loglv04 1 1 00..01..00..00..00 N/A
hdisk62:
u20lv 434 434 87..87..86..87..87 /u20
hdisk63:
u20lv 434 434 87..87..86..87..87 /u20
hdisk64:
u20lv 434 434 87..87..86..87..87 /u20
hdisk65:
u20lv 252 252 51..50..50..50..51 /u20
hdisk66:
u20lv 252 252 51..50..50..50..51 /u20
hdisk67:
u20lv 252 252 51..50..50..50..51 /u20
hdisk68:
u20lv 252 252 51..50..50..50..51 /u20
hdisk69:
u20lv 252 252 51..50..50..50..51 /u20
hdisk70:
u20lv 252 252 51..50..50..50..51 /u20
hdisk71:
u20lv 252 252 51..50..50..50..51 /u20
hdisk72:
u20lv 252 252 51..50..50..50..51 /u20
hdisk73:
u20lv 252 252 51..50..50..50..51 /u20
hdisk75:
u20lv 252 252 51..50..50..50..51 /u20
hdisk76:
u20lv 252 252 51..50..50..50..51 /u20
hdisk74:
u20lv 106 106 06..50..50..00..00 /u20
hdisk8:
paging01 192 192 00..00..86..87..19 N/A
hdisk9:
u20lv 434 434 87..87..86..87..87 /u20

Well, I think we've got the basic idea - everything in the same volume group, with near enough everything defined to run on one LV, up and down one FC HBA.
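If the aim is to get at least the redo logs off the busiest spindles, one option within the existing volume group is migratepv at the LV level; the LV and hdisk names below are placeholders, so substitute whatever the iostat/filemon figures point at:

# move a single logical volume (e.g. the one holding the redo logs)
# from a busy LUN to a quieter one in the same VG
migratepv -l <redo_lv> hdisk77 hdisk74

# confirm the new placement afterwards
lslv -l <redo_lv>
lspv -l hdisk74

Getting traffic onto fcs1 would be a bigger change - it would mean presenting LUNs down that adapter (or multipathing on the Hitachi side) rather than just shuffling LVs within the VG.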
 