Socket Processes


costiles (Technical User)
In our application we use sockets to address a Universe database. The sockets are all owned by one user, so we have the number of processes per user set high enough to let that single user spawn them all. Up to about 50 processes accessing the database, everything is fine; above 50, performance really begins to bog down. It appears to be I/O that is causing the problem. Does anyone know of parameters that might be tweaked so the processes get the resources they need once this apparent 50-process limit is reached?
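For context, the per-user process ceiling described here is the AIX maxuproc tunable on sys0. A minimal sketch of checking and raising it; the value 4096 is illustrative only, not a recommendation:

# Show the current per-user process limit (sys0 attribute maxuproc)
lsattr -El sys0 -a maxuproc

# Raise it dynamically; size the value to the expected number of
# socket processes plus headroom (4096 is illustrative)
chdev -l sys0 -a maxuproc=4096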
 
Khalid,
I thought the p column under vmstat -I was the number of threads waiting to write to raw devices. I have no raw devices. Having said that, I am sure there are other things to be learned from the -I option.
I will try those things. Thanks.
Clay
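For reference, this is the vmstat form being discussed; on AIX the -I option adds the p column (threads waiting on raw-device I/O) plus fi/fo file-paging columns. A minimal example, with an illustrative interval and count:

# Extended I/O view: 5-second interval, 3 samples; "p" counts threads
# waiting on raw-device I/O, fi/fo show file pages in and out
vmstat -I 5 3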
 
Before the test:
Thu Apr 5 15:40:43 CDT 2007
1003520 memory pages
960302 lruable pages
3957 free pages
1 memory pools
226974 pinned pages
80.0 maxpin percentage
35.0 minperm percentage
65.0 maxperm percentage
56.2 numperm percentage
540230 file pages
0.0 compressed percentage
0 compressed pages
56.2 numclient percentage
65.0 maxclient percentage
540230 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2740 filesystem I/Os blocked with no fsbuf
0 client filesystem I/Os blocked with no fsbuf
2135 external pager filesystem I/Os blocked with no fsbuf
0 Virtualized Partition Memory Page Faults
0.00 Time resolving virtualized partition memory page faults
vgname = rootvg
pv_pbuf_count = 512
total_vg_pbufs = 1024
max_vg_pbuf_count = 16384
pervg_blocked_io_count = 0
pv_min_pbuf = 1024
global_blocked_io_count = 0

Kernel malloc statistics:

******* CPU 0 *******
By size inuse calls failed delayed free hiwat freed
32 41 50 0 0 87 2508 0
64 174 20131 0 2 82 2508 0
128 290 6883 0 9 286 1254 0
256 236 35896 0 13 532 2508 0
512 1305 480266 0 277 975 3135 0
1024 100 4024 0 41 64 1254 0
2048 2339 31351 0 1086 35 1881 0
4096 68 353 0 21 14 627 0
8192 1 631 0 30 20 313 0
16384 512 1750 0 85 89 156 0
32768 0 4 0 2 4 78 0
65536 1 80 0 9 10 78 0
131072 2 2 0 0 49 98 0


******* CPU 1 *******
By size inuse calls failed delayed free hiwat freed
64 2 8102 0 0 62 2508 0
128 3 7 0 0 29 1254 0
256 0 6193 0 0 144 2508 0
512 123 167536 0 5 213 3135 0
1024 0 58 0 3 12 1254 0
2048 2 16131 0 0 100 1881 0
4096 0 56 0 3 3 627 0
8192 0 7 0 3 3 313 0
16384 0 446 0 17 123 156 0
32768 0 14 0 2 4 78 0
65536 0 58 0 7 12 78 0
131072 0 0 0 0 49 98 0


******* CPU 2 *******
By size inuse calls failed delayed free hiwat freed
64 136 16642 0 1 56 2508 0
128 9 736 0 0 23 1254 0
256 2 12830 0 0 142 2508 0
512 149 455038 0 13 363 3135 0
1024 9 2980 0 9 31 1254 0
2048 246 27196 0 30 12 1881 0
4096 0 279 0 9 14 627 0
8192 0 599 0 24 23 313 0
16384 0 1301 0 32 141 156 50
32768 0 5 0 2 23 78 0
65536 0 82 0 6 12 78 0
131072 0 0 0 0 11 22 0


******* CPU 3 *******
By size inuse calls failed delayed free hiwat freed
64 1 8431 0 0 63 2508 0
128 1 15 0 0 31 1254 0
256 2 5197 0 0 126 2508 0
512 121 163773 0 2 183 3135 0
1024 0 45 0 2 8 1254 0
2048 2 16801 0 0 70 1881 0
4096 0 83 0 2 8 627 0
8192 0 9 0 4 3 313 0
16384 0 477 0 18 101 156 0
32768 0 10 0 1 1 78 0
65536 0 51 0 5 6 78 0
131072 0 0 0 0 49 98 0


******* CPU 4 *******
By size inuse calls failed delayed free hiwat freed
32 5 5 0 0 123 2508 0
64 217 20634 0 2 39 2508 0
128 118 1866 0 3 42 1254 0
256 42 11810 0 1 118 2508 0
512 147 602064 0 254 1909 3135 0
1024 77 4495 0 18 11 1254 0
2048 368 32230 0 1024 1512 1881 84
4096 0 315 0 0 73 627 0
8192 3 704 0 13 29 313 0
16384 0 1943 0 83 156 156 480
32768 0 10 0 1 36 78 0
65536 0 91 0 6 17 78 0
131072 0 0 0 0 69 98 0


******* CPU 5 *******
By size inuse calls failed delayed free hiwat freed
64 1 9099 0 0 63 2508 0
128 0 8 0 0 32 1254 0
256 0 2977 0 0 160 2508 0
512 130 181013 0 5 430 3135 0
1024 0 66 0 2 8 1254 0
2048 2 18142 0 0 86 1881 0
4096 0 91 0 0 0 627 0
8192 0 7 0 3 0 313 0
16384 0 524 0 13 84 156 0
32768 0 9 0 1 2 78 0
65536 0 50 0 4 6 78 0
131072 0 0 0 0 21 43 0


******* CPU 6 *******
By size inuse calls failed delayed free hiwat freed
32 63 64 0 0 65 2508 0
64 162 15115 0 1 30 2508 0
128 51 1996 0 2 109 1254 0
256 23 17097 0 2 153 2508 0
512 176 461502 0 5 296 3135 0
1024 37 2852 0 16 27 1254 0
2048 241 24387 0 38 11 1881 0
4096 1 335 0 16 14 627 0
8192 2 587 0 21 22 313 0
16384 8 1233 0 18 87 156 0
32768 0 4 0 2 3 78 0
65536 0 83 0 8 13 78 0
131072 0 0 0 0 49 98 0


******* CPU 7 *******
By size inuse calls failed delayed free hiwat freed
64 0 8198 0 0 64 2508 0
128 1 11 0 0 31 1254 0
256 0 5183 0 0 160 2508 0
512 148 164200 0 6 172 3135 0
1024 0 38 0 2 8 1254 0
2048 0 16344 0 0 82 1881 0
4096 0 83 0 3 5 627 0
8192 0 7 0 3 2 313 0
16384 0 480 0 15 90 156 0
32768 0 10 0 1 2 78 0
65536 0 52 0 4 8 78 0
131072 0 0 0 0 49 98 0

Streams mblk statistic failures:
0 high priority mblk failures
0 medium priority mblk failures
0 low priority mblk failures

System configuration: lcpu=8 drives=9 paths=8 vdisks=0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.0 5.6 93.4 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 0.3 1.3 98.4 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 0.9 4.3 94.8 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.0 5.3 93.7 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.2 5.6 93.2 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
During the test:
Thu Apr 5 16:11:03 CDT 2007
1003520 memory pages
960302 lruable pages
3115 free pages
1 memory pools
247571 pinned pages
80.0 maxpin percentage
35.0 minperm percentage
65.0 maxperm percentage
45.2 numperm percentage
434621 file pages
0.0 compressed percentage
0 compressed pages
45.2 numclient percentage
65.0 maxclient percentage
434621 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2740 filesystem I/Os blocked with no fsbuf
0 client filesystem I/Os blocked with no fsbuf
2135 external pager filesystem I/Os blocked with no fsbuf
0 Virtualized Partition Memory Page Faults
0.00 Time resolving virtualized partition memory page faults
vgname = rootvg
pv_pbuf_count = 512
total_vg_pbufs = 1024
max_vg_pbuf_count = 16384
pervg_blocked_io_count = 0
pv_min_pbuf = 1024
global_blocked_io_count = 0

Kernel malloc statistics:

******* CPU 0 *******
By size inuse calls failed delayed free hiwat freed
32 43 52 0 0 85 2508 0
64 200 23733 0 2 56 2508 0
128 309 8406 0 9 267 1254 0
256 250 42420 0 13 518 2508 0
512 1362 564362 0 277 918 3135 0
1024 120 4099 0 41 44 1254 0
2048 2383 37920 0 1086 77 1881 0
4096 73 498 0 26 6 627 0
8192 9 729 0 40 16 313 0
16384 528 1996 0 85 61 156 0
32768 1 11 0 2 3 78 0
65536 1 104 0 9 10 78 0
131072 2 2 0 0 49 98 0


******* CPU 1 *******
By size inuse calls failed delayed free hiwat freed
64 16 10437 0 0 112 2508 0
128 10 27 0 0 22 1254 0
256 0 6642 0 0 160 2508 0
512 121 192838 0 5 327 3135 0
1024 3 63 0 3 9 1254 0
2048 31 20758 0 0 99 1881 0
4096 4 86 0 7 1 627 0
8192 6 29 0 14 13 313 0
16384 4 562 0 17 103 156 0
32768 0 18 0 2 4 78 0
65536 0 69 0 7 12 78 0
131072 0 0 0 0 49 98 0


******* CPU 2 *******
By size inuse calls failed delayed free hiwat freed
64 162 20001 0 1 30 2508 0
128 25 1698 0 0 39 1254 0
256 2 20946 0 0 382 2508 0
512 232 547929 0 13 600 3135 0
1024 13 3005 0 9 27 1254 0
2048 299 33838 0 30 61 1881 0
4096 6 421 0 15 9 627 0
8192 6 711 0 42 27 313 0
16384 15 1552 0 32 94 156 50
32768 1 8 0 2 22 78 0
65536 2 106 0 6 10 78 0
131072 0 0 0 0 11 22 0


******* CPU 3 *******
By size inuse calls failed delayed free hiwat freed
64 16 10777 0 0 48 2508 0
128 10 41 0 0 22 1254 0
256 2 5971 0 0 222 2508 0
512 138 188341 0 2 278 3135 0
1024 11 68 0 6 13 1254 0
2048 34 21486 0 0 74 1881 0
4096 2 115 0 4 2 627 0
8192 3 38 0 14 5 313 0
16384 15 608 0 18 74 156 0
32768 0 12 0 1 1 78 0
65536 0 54 0 5 6 78 0
131072 0 0 0 0 49 98 0


******* CPU 4 *******
By size inuse calls failed delayed free hiwat freed
32 5 5 0 0 123 2508 0
64 244 24136 0 2 76 2508 0
128 124 2858 0 3 36 1254 0
256 41 21897 0 1 375 2508 0
512 233 718017 0 254 1823 3135 0
1024 81 4539 0 18 7 1254 0
2048 424 38999 0 1024 1456 1881 84
4096 7 519 0 0 49 627 0
8192 17 805 0 13 15 313 0
16384 15 2225 0 83 141 156 480
32768 0 15 0 1 36 78 0
65536 3 117 0 6 14 78 0
131072 0 0 0 0 69 98 0


******* CPU 5 *******
By size inuse calls failed delayed free hiwat freed
64 23 11629 0 0 41 2508 0
128 12 116 0 0 20 1254 0
256 2 3515 0 0 158 2508 0
512 140 209150 0 5 420 3135 0
1024 3 72 0 2 5 1254 0
2048 49 23195 0 0 51 1881 0
4096 4 139 0 5 10 627 0
8192 6 40 0 15 16 313 0
16384 11 657 0 13 57 156 0
32768 0 11 0 1 2 78 0
65536 0 57 0 4 6 78 0
131072 0 0 0 0 21 43 0


******* CPU 6 *******
By size inuse calls failed delayed free hiwat freed
32 63 64 0 0 65 2508 0
64 185 18275 0 1 71 2508 0
128 61 2916 0 2 99 1254 0
256 25 24410 0 2 343 2508 0
512 215 553461 0 5 529 3135 0
1024 42 2882 0 16 22 1254 0
2048 291 30473 0 38 95 1881 0
4096 7 457 0 24 6 627 0
8192 5 674 0 36 11 313 0
16384 15 1491 0 18 56 156 0
32768 2 9 0 2 1 78 0
65536 3 101 0 8 10 78 0
131072 0 0 0 0 49 98 0


******* CPU 7 *******
By size inuse calls failed delayed free hiwat freed
64 17 10444 0 0 47 2508 0
128 7 50 0 0 25 1254 0
256 0 5641 0 0 160 2508 0
512 141 190167 0 6 275 3135 0
1024 3 47 0 2 5 1254 0
2048 36 20833 0 0 68 1881 0
4096 4 127 0 12 2 627 0
8192 4 48 0 16 14 313 0
16384 12 622 0 15 64 156 0
32768 0 11 0 1 2 78 0
65536 0 53 0 4 8 78 0
131072 0 0 0 0 49 98 0

Streams mblk statistic failures:
0 high priority mblk failures
0 medium priority mblk failures
0 low priority mblk failures

System configuration: lcpu=8 drives=9 paths=8 vdisks=0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 23.1 8.9 49.5 18.5

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 37.0 298.0 73.0 88 210
hdisk5 18.0 182.0 46.0 140 42
hdisk3 85.0 554.0 146.0 64 490
hdisk1 0.0 0.0 0.0 0 0
hdisk6 34.0 262.0 62.0 104 158
hdisk2 91.0 571.0 151.0 52 519
hdisk4 53.0 298.0 82.0 184 114
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 22.7 9.8 53.2 14.3

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 10.0 92.0 19.0 0 92
hdisk7 28.0 263.0 57.0 152 111
hdisk5 24.0 198.0 51.0 140 58
hdisk3 85.0 614.0 157.0 92 522
hdisk1 8.0 92.0 19.0 0 92
hdisk6 24.0 206.0 47.0 152 54
hdisk2 72.0 563.0 143.0 40 523
hdisk4 27.0 225.0 54.0 104 121
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 29.1 9.5 49.6 11.8

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 19.8 155.1 39.5 116 41
hdisk5 17.8 177.8 39.5 172 8
hdisk3 85.9 593.6 149.1 80 521
hdisk1 0.0 0.0 0.0 0 0
hdisk6 18.8 147.2 38.5 128 21
hdisk2 76.0 561.0 140.2 48 520
hdisk4 28.6 206.4 51.4 180 29
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 31.4 9.9 48.4 10.4

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 2.0 12.0 3.0 0 12
hdisk7 13.0 112.0 28.0 112 0
hdisk5 28.0 180.0 45.0 164 16
hdisk3 85.0 601.0 151.0 88 513
hdisk1 1.0 12.0 3.0 0 12
hdisk6 13.0 148.0 36.0 148 0
hdisk2 74.0 568.0 143.0 72 496
hdisk4 18.0 172.0 42.0 172 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 33.1 16.3 39.9 10.7

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 3.0 196.0 9.0 4 192
hdisk7 16.0 121.0 33.0 96 25
hdisk5 16.0 164.0 34.0 164 0
hdisk3 92.0 617.0 153.0 112 505
hdisk1 3.0 192.0 8.0 0 192
hdisk6 15.0 132.0 30.0 132 0
hdisk2 79.0 569.0 145.0 72 497
hdisk4 33.0 241.0 61.0 216 25
cd0 0.0 0.0 0.0 0 0

System configuration: lcpu=8 mem=3920MB

After the test:
Thu Apr 5 16:22:52 CDT 2007
1003520 memory pages
960302 lruable pages
8496 free pages
1 memory pools
241157 pinned pages
80.0 maxpin percentage
35.0 minperm percentage
65.0 maxperm percentage
44.1 numperm percentage
424109 file pages
0.0 compressed percentage
0 compressed pages
44.1 numclient percentage
65.0 maxclient percentage
424109 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2740 filesystem I/Os blocked with no fsbuf
0 client filesystem I/Os blocked with no fsbuf
2135 external pager filesystem I/Os blocked with no fsbuf
0 Virtualized Partition Memory Page Faults
0.00 Time resolving virtualized partition memory page faults
vgname = rootvg
pv_pbuf_count = 512
total_vg_pbufs = 1024
max_vg_pbuf_count = 16384
pervg_blocked_io_count = 0
pv_min_pbuf = 1024
global_blocked_io_count = 0

Kernel malloc statistics:

******* CPU 0 *******
By size inuse calls failed delayed free hiwat freed
32 43 52 0 0 85 2508 0
64 185 26010 0 2 71 2508 0
128 305 9501 0 9 271 1254 0
256 247 46697 0 13 521 2508 0
512 1351 606291 0 277 929 3135 0
1024 128 4162 0 41 36 1254 0
2048 2352 42315 0 1086 112 1881 0
4096 70 545 0 26 7 627 0
8192 8 730 0 40 17 313 0
16384 521 2033 0 85 68 156 0
32768 0 11 0 2 4 78 0
65536 6 110 0 9 5 78 0
131072 2 2 0 0 49 98 0


******* CPU 1 *******
By size inuse calls failed delayed free hiwat freed
64 3 11605 0 0 125 2508 0
128 11 160 0 0 21 1254 0
256 1 7105 0 0 159 2508 0
512 97 202137 0 5 543 3135 0
1024 0 63 0 3 12 1254 0
2048 3 23088 0 0 127 1881 0
4096 0 88 0 7 5 627 0
8192 1 30 0 14 6 313 0
16384 2 594 0 17 105 156 0
32768 0 18 0 2 4 78 0
65536 0 69 0 7 12 78 0
131072 0 0 0 0 49 98 0


******* CPU 2 *******
By size inuse calls failed delayed free hiwat freed
64 134 22112 0 1 122 2508 0
128 24 2939 0 0 72 1254 0
256 4 34378 0 0 588 2508 0
512 202 593113 0 13 678 3135 0
1024 12 3039 0 9 28 1254 0
2048 242 37893 0 30 126 1881 0
4096 1 454 0 15 3 627 0
8192 3 712 0 42 23 313 0
16384 7 1593 0 32 102 156 50
32768 0 9 0 2 23 78 0
65536 0 107 0 6 12 78 0
131072 0 0 0 0 11 22 0


******* CPU 3 *******
By size inuse calls failed delayed free hiwat freed
64 3 12092 0 0 61 2508 0
128 8 115 0 0 24 1254 0
256 0 9778 0 0 592 2508 0
512 105 198449 0 2 487 3135 0
1024 0 68 0 6 24 1254 0
2048 6 24095 0 0 110 1881 0
4096 0 120 0 4 3 627 0
8192 1 39 0 14 1 313 0
16384 3 631 0 18 77 156 0
32768 0 12 0 1 1 78 0
65536 0 54 0 5 6 78 0
131072 0 0 0 0 49 98 0


******* CPU 4 *******
By size inuse calls failed delayed free hiwat freed
32 5 5 0 0 123 2508 0
64 211 26296 0 2 109 2508 0
128 123 4019 0 3 37 1254 0
256 35 28269 0 1 509 2508 0
512 210 772149 0 254 1846 3135 0
1024 72 4598 0 19 20 1254 0
2048 360 43136 0 1024 1520 1881 84
4096 2 587 0 0 45 627 0
8192 10 812 0 13 22 313 0
16384 14 2277 0 83 142 156 480
32768 0 15 0 1 36 78 0
65536 1 122 0 6 16 78 0
131072 0 0 0 0 69 98 0


******* CPU 5 *******
By size inuse calls failed delayed free hiwat freed
64 2 13123 0 0 62 2508 0
128 13 198 0 0 19 1254 0
256 1 3821 0 0 159 2508 0
512 158 219998 0 5 402 3135 0
1024 0 83 0 4 16 1254 0
2048 4 26182 0 0 106 1881 0
4096 0 147 0 7 11 627 0
8192 0 50 0 15 20 313 0
16384 14 705 0 13 54 156 0
32768 0 11 0 1 2 78 0
65536 0 57 0 4 6 78 0
131072 0 0 0 0 21 43 0


******* CPU 6 *******
By size inuse calls failed delayed free hiwat freed
32 64 65 0 0 64 2508 0
64 166 20320 0 1 90 2508 0
128 61 3747 0 2 99 1254 0
256 25 35263 0 2 567 2508 0
512 194 593663 0 5 950 3135 0
1024 41 2929 0 16 23 1254 0
2048 250 34160 0 38 136 1881 0
4096 4 483 0 24 3 627 0
8192 4 676 0 37 3 313 0
16384 5 1543 0 18 56 156 0
32768 0 10 0 2 3 78 0
65536 1 105 0 8 12 78 0
131072 0 0 0 0 49 98 0


******* CPU 7 *******
By size inuse calls failed delayed free hiwat freed
64 2 11674 0 0 62 2508 0
128 7 114 0 0 25 1254 0
256 1 5920 0 0 159 2508 0
512 115 198937 0 6 301 3135 0
1024 0 52 0 2 8 1254 0
2048 4 23284 0 0 100 1881 0
4096 0 130 0 12 6 627 0
8192 0 49 0 16 18 313 0
16384 2 654 0 15 74 156 0
32768 0 13 0 1 2 78 0
65536 0 57 0 4 8 78 0
131072 0 0 0 0 49 98 0

Streams mblk statistic failures:
0 high priority mblk failures
0 medium priority mblk failures
0 low priority mblk failures
 
Khalid - I realize this is a lot of data. I would be glad to send it to you by email.
 
Khalid - above is the before, a little of the during, and the after (snapshot taken Thu Apr 5 16:22:52 CDT 2007). The test ran 20 minutes. Below is the iostat output from after the test:

System configuration: lcpu=8 drives=9 paths=8 vdisks=0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.0 5.3 93.7 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.0 5.7 93.3 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.9 2.3 95.8 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 20.0 140.2 35.0 0 140
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 30.0 140.2 35.0 0 140
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 1.2 4.2 94.6 0.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk3 0.0 68.0 17.0 0 68
hdisk1 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk2 0.0 68.0 17.0 0 68
hdisk4 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait
0 0 0 0 4096 2.2 6.3 89.9 1.7

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 10.0 105.0 18.0 0 105
hdisk7 10.0 12.0 4.0 0 12
hdisk5 0.0 0.0 0.0 0 0
hdisk3 20.0 100.0 26.0 0 100
hdisk1 10.0 105.0 18.0 0 105
hdisk6 0.0 8.0 2.0 0 8
hdisk2 20.0 104.0 27.0 0 104
hdisk4 10.0 8.0 3.0 0 8
cd0 0.0 0.0 0.0 0 0
 
The above system appears to be normal to me!

I can see that you said this at the start, when you opened this case:
"It appears to be I/O that is causing the problem"

How did you come to this conclusion in the first place?

Showing the iostat output during the test would help confirm whether there really is something wrong with the I/O. vmstat will help as well (watch the I/O waits!).

From the vmstat -v output (0 pending disk I/Os blocked with no pbuf) and lvmo -a (global_blocked_io_count = 0), it seems to me that the disk buffers are being handled nicely; there is no blocked I/O on any of them.

I remember the first output you posted above (for the lvmo -a command) showing global_blocked_io_count = 368314, which indicates that some of the I/Os were blocked at some stage!
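Should global_blocked_io_count climb like that again, the per-volume-group pbuf pool can be inspected and enlarged with lvmo. A minimal sketch; the value 1024 is illustrative, not a recommendation:

# Show pbuf usage and blocked-I/O counters for one volume group
lvmo -v rootvg -a

# Raise the number of pbufs added per physical volume in rootvg
# (1024 is illustrative; raise gradually and re-check the counters)
lvmo -v rootvg -o pv_pbuf_count=1024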

I think you are using two different servers for your testing, aren't you? Or you might have rebooted the system and reset the statistics that way!

Anyway, let us return to the point where you said "It appears to be I/O that is causing the problem" and identify whether that is the case; then we should know what is causing it. (I guess, as you said, it's the socket connections going high.)

I would like to get the output of the following during the test:

vmstat 5
iostat -A 1 5
pstat -a | grep aios | wc -l

vmstat -v (before and after the test)
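A minimal ksh sketch that gathers all of the above across one test window; the filenames, the 20-minute duration, and the one-minute aios sampling interval are illustrative only:

#!/usr/bin/ksh
# Capture the requested statistics around a 20-minute test window
vmstat -v > vmstat_v.before

vmstat 5 240 > vmstat.during &        # 240 samples x 5s = 20 minutes
VM=$!
iostat -A 5 240 > iostat_A.during &   # -A also reports AIO activity
IO=$!

# Sample the count of aios kernel processes once a minute
i=0
while [ $i -lt 20 ]; do
    pstat -a | grep aios | wc -l >> aios.count
    sleep 60
    i=$((i + 1))
done

wait $VM $IO
vmstat -v > vmstat_v.after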

PS: Sorry if this seems like too much, but performance tuning takes up most of an administrator's time!

Regards,
Khalid
 
Khalid,
Between test phases, we may have found the problem with the I/O. We use Dartmouth BASIC as a user-friendly language to access Universe, and CALLing a subroutine does not require as much I/O as EXECUTEing one. We think this may have been the problem.
One other question: from the results, it does not look to me like we are using the aio servers. Am I correct in that assessment?
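One hedged way to test the CALL-versus-EXECUTE theory at the OS level would be to count system calls for a single session under each variant and compare the read/write totals; uvsh is assumed here to be the name of the Universe session process:

# Attach truss to one running Universe session and tally its system
# calls; run once with the CALL version, once with EXECUTE, then
# compare. Stop with Ctrl-C to print the summary.
# (uvsh is an assumption about the session's process name.)
truss -c -p $(ps -ef | awk '/uvsh/ && !/awk/ {print $2; exit}')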
 
I can't tell from the output given above whether you are using aio or not! You can check that by running:

lsdev -C | grep aio

You can enable it through smitty aio --> Change / Show Characteristics of Asynchronous I/O; you can see whether it is enabled or disabled there as well.

You will need to reboot the system after enabling AIO.
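For reference, the same change can be made without smitty, assuming the legacy aio0 device of AIX 5.x; a minimal sketch:

# See whether the AIO subsystem is Defined or Available
lsdev -C | grep aio

# Record the change so AIO is configured at every boot (-P writes it
# to the ODM for the next boot), then activate it now with mkdev
# instead of waiting for the reboot noted above
chdev -l aio0 -a autoconfig=available -P
mkdev -l aio0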

As I said earlier, the picture will be much clearer if you run iostat and vmstat while you are doing the test!

Regards,
Khalid
 