
Performance on J40

Status
Not open for further replies.

Deak

Technical User
Nov 9, 2001
101
US
Hi folks,
My problem is with 2 hard drives taking a beating. I can't decide if it's processor bound or a drive problem. Does anyone have any suggestions? Here is some info regarding my system:
AIX 4.3.3.06
tty: tin tout avg-cpu: % user % sys % idle % iowait
21.0 3525.7 11.4 30.6 43.6 14.4

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 38.3 162.5 38.6 12 476
hdisk1 1.3 4.0 1.0 4 8
hdisk5 43.0 96.6 27.3 36 254
hdisk8 57.9 157.8 42.3 220 254
hdisk6 0.0 0.0 0.0 0 0
hdisk2 3.0 2.7 0.7 8 0
hdisk9 0.0 0.0 0.0 0 0
hdisk7 0.7 4.0 1.0 8 4
hdisk3 2.7 18.6 4.7 0 56
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
18.3 3420.5 9.7 34.7 42.2 13.5

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 31.0 138.5 32.0 8 408
hdisk1 0.3 1.3 0.3 4 0
hdisk5 55.3 134.2 35.0 28 375
hdisk8 63.9 180.8 46.6 168 375
hdisk6 0.0 0.0 0.0 0 0
hdisk2 2.0 2.7 0.7 8 0
hdisk9 0.0 0.0 0.0 0 0
hdisk7 1.3 5.3 1.3 4 12
hdisk3 4.0 18.6 4.7 0 56

hdisk5:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
lv03 325 325 00..108..108..108..01 /m4
#
hdisk8:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
lv03 325 325 00..108..108..108..01 /m4
#



Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging02 hdisk7 medvg2 192MB 1 yes yes lv
paging01 hdisk2 medvg2 320MB 1 yes yes lv
paging00 hdisk1 rootvg 720MB 4 yes yes lv
hd6 hdisk0 rootvg 768MB 4 yes yes lv

Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 57344 23748 59% 2775 10% /
/dev/hd2 737280 245048 67% 19705 11% /usr
/dev/hd9var 212992 168256 22% 1994 4% /var
/dev/hd3 221184 214040 4% 44 1% /tmp
/dev/hd1 16384 15484 6% 107 3% /home
/dev/lv00 2195456 417620 81% 21855 4% /src
/dev/lv01 2129920 115776 95% 3059 1% /m3
/dev/lv02 6144000 1274980 80% 2160 1% /m2
/dev/lv03 5324800 599960 89% 655 1% /m4


# lsdev -Cc processor
proc2 Available 00-0Q-00-00 Processor
proc3 Available 00-0Q-00-01 Processor
proc4 Available 00-0R-00-00 Processor
proc5 Available 00-0R-00-01 Processor
proc6 Available 00-0S-00-00 Processor
proc7 Available 00-0S-00-01 Processor


# bootinfo -r    (real memory in KB; 2097152 KB = 2 GB)
2097152




Does anyone have any suggestions or thoughts on this matter? Total users are around 150.

I am not convinced even a new system would solve it, since this system already has 6 processors. Help!
 
"Identifying the Performance-Limiting Resource" describes techniques
for finding the bottleneck.

Here are some general rules of thumb:

Run vmstat 5 5 (the fileset that provides vmstat is bos.acct).

You are CPU bound if vmstat's us and sy columns consistently add up to 80% or more.

If the run queue (the r column) is > 2.5 and you are CPU bound, you probably need another processor.

If the run queue is <= 2.5 but you are still CPU bound, a single process is usually using all your processor time, typically a runaway process or a memory leak.

If you are disk bound, the wa column will be >= 40%.

If pi and po are consistently high, you likely need more RAM.

Check the fre column to see how much free memory is available to the system.

In iostat 5 5, if there is a lot of activity on one disk, try to spread the data over multiple drives.
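The thresholds above can be sketched as a small filter over vmstat output. This is a hedged illustration only: the two data lines are fabricated, and the field positions assume the AIX 4.3 vmstat column order (r b avm fre re pi po fr sr cy in sy cs us sy id wa); check your own vmstat header before trusting the field numbers.

```shell
# Sketch only: two fabricated vmstat data lines in assumed AIX 4.3
# column order (r b avm fre re pi po fr sr cy in sy cs us sy id wa).
cat <<'EOF' > vmstat.sample
 3 0 120000 4500 0  0  0 0 0 0 200 1500 600 45 40 10  5
 1 0 118000  300 0 80 95 0 0 0 250 1800 700 20 15 25 40
EOF

# Apply the rules of thumb: us+sy >= 80% => CPU bound (run queue > 2.5
# suggests another CPU, otherwise suspect a runaway process);
# wa >= 40% => disk bound; sustained pi/po => short on RAM.
verdicts=$(awk '{
  r = $1; fre = $4; pi = $6; po = $7; us = $14; sy = $15; wa = $17
  v = "looks ok"
  if (us + sy >= 80) {
    v = (r > 2.5) ? "CPU bound: consider another processor" : "CPU bound: suspect a runaway process"
  } else if (wa >= 40) {
    v = "disk bound: spread the I/O"
  }
  if (pi > 0 && po > 0) v = v "; paging heavily: consider more RAM"
  print "sample " NR ": " v " (fre=" fre ")"
}' vmstat.sample)
echo "$verdicts"
```

For real use you would pipe `vmstat 5 5` through the awk filter (skipping the header lines) instead of the sample file.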

-----------------


 
Deak,

Looking at the % tm_act (time active) column in the iostat output, you need to look at hdisk5 and hdisk8; apart from hdisk0, the other disks show very small stats.

Look at spreading the /m4 filesystem over more than 1 spindle. I assume hdisk5 and hdisk8 hold the primary and the mirrored copy. I would create a new LV across 2 or more disks and then mirror it to 2 more to ensure availability.
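The steps above might look something like the following AIX LVM sketch. The LV name, disk names, and PP count are purely illustrative assumptions, not taken from the poster's system; check lsvg -p and lspv output before running anything like this.

```shell
# Create a new LV spread across two disks with maximum inter-disk allocation
mklv -y lv04 -e x medvg2 325 hdisk5 hdisk6

# Add a mirrored copy on two more disks, then synchronize it
mklvcopy -e x lv04 2 hdisk8 hdisk9
syncvg -l lv04
```

The -e x (maximum allocation range) flag is what spreads the LPs across the listed disks rather than filling one disk first.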

You also have a lot of paging spaces of differing sizes. They should all be the same size, because paging is done in a round-robin fashion in 4k pages across them; this means the smaller paging spaces are thrashed before the larger ones see much use at all.
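A hedged sketch of how you might equalize them on AIX 4.3 (the LP counts are illustrative; note that a paging space cannot be shrunk on this release, so an oversized one has to be deactivated and recreated):

```shell
lsps -a                  # show size and %used of every paging space
chps -s 4 paging02       # grow paging02 by 4 LPs to bring it up to size

# To retire an oversized paging space instead:
#   chps -a n paging00   # stop activating it at restart
#   (reboot, then) rmps paging00
```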

Just my 2 cents' worth.

PSD
IBM Certified Specialist - AIX V4.3 Systems Support
IBM Certified Specialist - AIX V4 HACMP
 
Thanks for the input. I thought about moving data off the /m4 filesystem but was not sure if this would truly help or not. I will take a look at the vmstat output and see what jumps out at me. Thanks again for the information. aixqueen, thanks for the details; I'll take a look for that book.
 
I have the luxury of using lots of DASD, but we like to distribute "hot" LVs across multiple disks using the maximum allocation policy, assuming I/O is your bottleneck. iostat 5 5 will help you figure out DASD usage, as mentioned earlier.

If you are running Oracle or something else that uses a "buffer", you should make sure the buffer size does not exceed real RAM; otherwise you are defeating the purpose of the buffer and will see excessive paging.
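As a trivial illustration of that last point, here's a hedged sketch comparing a buffer size against real memory. The 2097152 KB figure is the bootinfo -r value from this thread; the buffer size is an assumed example, not from the poster's system.

```shell
real_kb=2097152      # real memory in KB, from bootinfo -r above
buffer_kb=1048576    # assumed database buffer size in KB (illustration only)

if [ "$buffer_kb" -ge "$real_kb" ]; then
  echo "buffer >= real RAM: expect heavy paging"
else
  echo "buffer fits within real RAM"
fi
```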

My 2 cents.
 
