
Oracle9i is DOG SLOW on AIX 5.2.0.5


Michelleberard (Technical User), Feb 7, 2005:
Hi, I'm an Oracle DBA and we have a single 170GB database on an AIX 5L server with 8GB RAM. The data is stored on an EMC Symmetrix and we're using JFS2 filesystems.
There are only a handful of users on this powerful server accessing the database (which will be converted to production very soon). I have optimized the init.ora parameters; Oracle's SGA is roughly 3GB. I have 12 years of Oracle DBA experience and feel almost certain it's not the database (I know, you've heard that before).
I'm suspicious of the AIX parameters, specifically minperm, maxperm, maxclient, minfree and maxfree - most of these settings are at their defaults - and I understand that Oracle does not perform well with the default 20/80 minperm/maxperm. I tried tweaking them once to 10/40 and the users reported a slight improvement, but some transactions were just as slow as ever. Our Sys Admin wanted these parameters changed back to their defaults. We're using async I/O, but we're not mounting our database filesystems with Direct I/O or Concurrent I/O - I'm not sure it even matters in a SAN environment.
Our Sys Admin just applied the 5.2.0.5 patch today and we're hoping that may solve the problem, but that seems a little too optimistic. Before the latest patch was applied, lsps -a showed up to 20% swap space used. It's only 3% now, but the server has only been up about 10 hours.
AIX folks with Oracle experience, I'm open to any and all pointers. We have to find a solution soon!
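In case it helps anyone following along, the vmo syntax involved in that 10/40 experiment looks roughly like this (illustrative values, not a recommendation; without -p the change does not survive a reboot):

# Check the current file-cache tunables
vmo -a | grep -E "perm|client|free"
# Try 10/40 for the current session (maxclient% may not exceed maxperm%)
vmo -o minperm%=10 -o maxperm%=40 -o maxclient%=40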

 
Hi,

Please post the output from the following, taken under full load:

1. vmstat 2 10
2. topas (one screen)
3. df -k
4. vmo -a
5. lsdev -Cs scsi
6. lsdev -Cc disk
7. iostat 2 6
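If it makes gathering easier, a small ksh script along these lines can capture most of that in one shot (the file name and layout are my own invention; topas is interactive, so grab that screen by hand):

#!/bin/ksh
# Illustrative collector - run it while users are on the system.
OUT=/tmp/perfstats.$(date +%Y%m%d%H%M)
{
  echo "=== vmstat 2 10 ===";    vmstat 2 10
  echo "=== df -k ===";          df -k
  echo "=== vmo -a ===";         vmo -a
  echo "=== lsdev -Cs scsi ==="; lsdev -Cs scsi
  echo "=== lsdev -Cc disk ==="; lsdev -Cc disk
  echo "=== iostat 2 6 ===";     iostat 2 6
} > $OUT 2>&1
echo "Output saved to $OUT"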

Thanks

Long live king Moshiach !
 
I appreciate your help and will gladly gather the statistics you request. But this is a pre-production server and it's never really under a heavy load; the absence of a heavy load has been a big obstacle in identifying the problem. From a pure Oracle standpoint, could the latency be due to costly disk reads because the buffer cache never gets loaded under such light activity?

Here's what I am keeping a close eye on: the server was rebooted yesterday and "lsps -a" showed 3% of swap space utilized. Today we're up to 14%, and this number seems to grow steadily even with light activity. lsps -a shows the swapping high-water mark, right? What about vmstat - does it show a HWM, or paging stats for the polling interval only?
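My tentative understanding, for what it's worth (please correct me): lsps -a reports the percentage of paging space currently allocated (with deferred allocation it tends only to grow, so it acts like a high-water mark), while vmstat's pi/po columns show paging activity per polling interval and vmstat -s shows cumulative counters since boot. E.g.:

lsps -s                           # summary: total size and %used right now
vmstat -s | grep "paging space"   # cumulative page ins/outs since boot
vmstat 5 3                        # pi/po = per-interval paging activity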

By the way, we installed the ML5 patch yesterday.

nddb@npgnd1> lsps -a
Page Space  Physical Volume  Volume Group  Size    %Used  Active  Auto  Type
hd6         hdisk0           rootvg        5120MB  14     yes     yes   lv

Once users start their testing today, I will gather those stats for your review. FYI - our vendor just rebooted the server, so our swapping stats are back to 3%.

Thanks again.
 
Hi,
Swap space consumption by itself is normally not an indication of a performance problem.
Once you can provide the data, one can make an intelligent guess at the real bottleneck.
Thanks

Long live king Moshiach !
 
Here are some of the results you requested. I will wait until users are testing today to get you the others...

# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 262144 234092 11% 2410 5% /
/dev/hd2 1835008 308236 84% 30683 31% /usr
/dev/hd9var 262144 215728 18% 406 1% /var
/dev/hd3 524288 479412 9% 335 1% /tmp
/dev/hd1 262144 261008 1% 27 1% /home
/proc - - - - - /proc
/dev/hd10opt 524288 307532 42% 1014 2% /opt
/dev/smgtlv 2113536 2094576 1% 140 1% /smgthome
/dev/iohomelv 15384576 13946424 10% 25987 1% /ndiohome
/dev/basketlv 5160960 4848792 7% 1286 1% /ndiohome/ndio/Baskets
/dev/orabinlv 11468800 6333796 45% 24300 2% /nddbhome/nddb
/dev/datalv 25804800 10094512 61% 57 1% /nddbhome/nddb/DATA
/dev/lob1lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/1
/dev/lob2lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/2
/dev/lob3lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/3
/dev/lob4lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/4
/dev/lob5lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/5
/dev/lob6lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/6
/dev/lob7lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/7
/dev/lob8lv 17203200 1471420 92% 20 1% /nddbhome/nddb/DATA/CciLob/8
/dev/lob9lv 17203200 17200244 1% 5 1% /nddbhome/nddb/DATA/CciLob/9
/dev/lob10lv 17203200 17200244 1% 5 1% /nddbhome/nddb/DATA/CciLob/10
/dev/redo1lv 2457600 2194744 11% 7 1% /nddbhome/nddb/DATA/Redologs/1
/dev/redo2lv 2457600 2325820 6% 6 1% /nddbhome/nddb/DATA/Redologs/2
/dev/bklv 172032000 29880708 83% 167 1% /nddbhome/nddb/Backup
/dev/vrtclv 262144 246796 6% 94 1% /vrtc
/dev/patchlv 3145728 2307468 27% 290 1% /patch


# vmo -a
memory_frames = 2097152
pinnable_frames = 1839225
maxfree = 128
minfree = 120
minperm% = 20
minperm = 392631
maxperm% = 80
maxperm = 1570527
strict_maxperm = 0
maxpin% = 80
maxpin = 1677722
maxclient% = 80
lrubucket = 131072
defps = 1
nokilluid = 0
numpsblks = 1310720
npskill = 10240
npswarn = 40960
v_pinshm = 0
pta_balance_threshold = n/a
pagecoloring = n/a
framesets = 2
mempools = 1
lgpg_size = 0
lgpg_regions = 0
num_spec_dataseg = 0
spec_dataseg_int = 512
memory_affinity = 1
htabscale = -1
force_relalias_lite = 0
relalias_percentage = 0
data_stagger_interval = 161
large_page_heap_size = 0
kernel_heap_psize = 4096
soft_min_lgpgs_vmpool = 0
vmm_fork_policy = 0
low_ps_handling = 1
mbuf_heap_psize = 4096
strict_maxclient = 1
cpu_scale_memp = 8
lru_poll_interval = 0
lru_file_repage = 1


# lsdev -Csscsi
cd0 Available 1Z-08-00-1,0 16 Bit LVD SCSI DVD-ROM Drive
hdisk0 Available 1Z-09-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 14-08-00-10,0 16 Bit LVD SCSI Disk Drive
ses0 Available 1Z-08-00-14,0 SCSI Enclosure Services Device
ses1 Available 1Z-09-00-15,0 SCSI Enclosure Services Device
ses2 Available 14-08-00-15,0 SCSI Enclosure Services Device


# lsdev -Ccdisk
hdisk0 Available 1Z-09-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 14-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk3 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk4 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk5 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk6 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk7 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk8 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk9 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk10 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk11 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk12 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk13 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk14 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk15 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk16 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk17 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk18 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk19 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk20 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk21 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk22 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk23 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk24 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk25 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk26 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk27 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk28 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk29 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk30 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk31 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk32 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk33 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk34 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk35 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk36 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk37 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk38 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk39 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk40 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk41 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk42 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk43 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk44 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk45 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk46 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk47 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk48 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk49 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk50 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk51 Available 1n-08-01 EMC Symmetrix FCP Raid1
hdisk52 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk53 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk54 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk55 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk56 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk57 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk58 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk59 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk60 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk61 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk62 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk63 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk64 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk65 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk66 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk67 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk68 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk69 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk70 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk71 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk72 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk73 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk74 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk75 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk76 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk77 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk78 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk79 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk80 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk81 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk82 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk83 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk84 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk85 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk86 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk87 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk88 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk89 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk90 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk91 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk92 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk93 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk94 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk95 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk96 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk97 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk98 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk99 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk100 Available 11-08-01 EMC Symmetrix FCP Raid1
hdisk101 Available 11-08-01 EMC Symmetrix FCP Raid1
hdiskpower0 Available 1n-08-01 PowerPath Device
hdiskpower1 Available 1n-08-01 PowerPath Device
hdiskpower2 Available 1n-08-01 PowerPath Device
hdiskpower3 Available 1n-08-01 PowerPath Device
hdiskpower4 Available 1n-08-01 PowerPath Device
hdiskpower5 Available 1n-08-01 PowerPath Device
hdiskpower6 Available 1n-08-01 PowerPath Device
hdiskpower7 Available 1n-08-01 PowerPath Device
hdiskpower8 Available 1n-08-01 PowerPath Device
hdiskpower9 Available 1n-08-01 PowerPath Device
hdiskpower10 Available 1n-08-01 PowerPath Device
hdiskpower11 Available 1n-08-01 PowerPath Device
hdiskpower12 Available 1n-08-01 PowerPath Device
hdiskpower13 Available 1n-08-01 PowerPath Device
hdiskpower14 Available 1n-08-01 PowerPath Device
hdiskpower15 Available 1n-08-01 PowerPath Device
hdiskpower16 Available 1n-08-01 PowerPath Device
hdiskpower17 Available 1n-08-01 PowerPath Device
hdiskpower18 Available 1n-08-01 PowerPath Device
hdiskpower19 Available 1n-08-01 PowerPath Device
hdiskpower20 Available 1n-08-01 PowerPath Device
hdiskpower21 Available 1n-08-01 PowerPath Device
hdiskpower22 Available 1n-08-01 PowerPath Device
hdiskpower23 Available 1n-08-01 PowerPath Device
hdiskpower24 Available 1n-08-01 PowerPath Device
hdiskpower25 Available 1n-08-01 PowerPath Device
hdiskpower26 Available 1n-08-01 PowerPath Device
hdiskpower27 Available 1n-08-01 PowerPath Device
hdiskpower28 Available 1n-08-01 PowerPath Device
hdiskpower29 Available 1n-08-01 PowerPath Device
hdiskpower30 Available 1n-08-01 PowerPath Device
hdiskpower31 Available 1n-08-01 PowerPath Device
hdiskpower32 Available 1n-08-01 PowerPath Device
hdiskpower33 Available 1n-08-01 PowerPath Device
hdiskpower34 Available 1n-08-01 PowerPath Device
hdiskpower35 Available 1n-08-01 PowerPath Device
hdiskpower36 Available 1n-08-01 PowerPath Device
hdiskpower37 Available 1n-08-01 PowerPath Device
hdiskpower38 Available 1n-08-01 PowerPath Device
hdiskpower39 Available 1n-08-01 PowerPath Device
hdiskpower40 Available 1n-08-01 PowerPath Device
hdiskpower41 Available 1n-08-01 PowerPath Device
hdiskpower42 Available 1n-08-01 PowerPath Device
hdiskpower43 Available 1n-08-01 PowerPath Device
hdiskpower44 Available 1n-08-01 PowerPath Device
hdiskpower45 Available 1n-08-01 PowerPath Device
hdiskpower46 Available 1n-08-01 PowerPath Device
hdiskpower47 Available 1n-08-01 PowerPath Device
hdiskpower48 Available 11-08-01 PowerPath Device
hdiskpower49 Available 11-08-01 PowerPath Device
 
We have about a half-dozen users on the test system. This is our high-water mark right now, but the user count will increase significantly in the coming weeks.


# iostat 2 6
System configuration: lcpu=4 disk=153

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.3 48.4 12.0 1.7 84.8 1.5
" Disk history since boot not available. "


tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 31.8 1.8 65.1 1.3

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.5 1.9 0.5 4 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk5 0.0 3.9 1.0 0 8
hdisk6 0.0 0.0 0.0 0 0
hdisk7 0.0 15.5 2.4 0 32
hdisk8 0.0 0.0 0.0 0 0
hdisk9 0.0 3.9 0.5 0 8
hdisk10 1.0 3.9 0.5 8 0
hdisk11 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk18 0.0 3.9 0.5 0 8
hdisk19 0.0 0.0 0.0 0 0
hdisk20 0.0 0.0 0.0 0 0
hdisk21 0.0 0.0 0.0 0 0
hdisk22 0.0 0.0 0.0 0 0
hdisk23 0.0 0.0 0.0 0 0
hdisk24 0.0 0.0 0.0 0 0
hdisk25 0.0 0.0 0.0 0 0
hdisk26 0.0 0.0 0.0 0 0
hdisk27 0.0 0.0 0.0 0 0
hdisk28 0.0 0.0 0.0 0 0
hdisk29 0.5 3.9 0.5 8 0
hdisk30 0.0 0.0 0.0 0 0
hdisk31 0.0 0.0 0.0 0 0
hdisk32 0.0 0.0 0.0 0 0
hdisk33 0.0 0.0 0.0 0 0
hdisk34 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk36 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
hdisk38 0.0 0.0 0.0 0 0
hdisk40 0.0 0.0 0.0 0 0
hdisk41 0.0 0.0 0.0 0 0
hdisk42 0.0 0.0 0.0 0 0
hdisk43 0.0 0.0 0.0 0 0
hdisk44 0.0 0.0 0.0 0 0
hdisk45 0.0 0.0 0.0 0 0
hdisk46 0.0 0.0 0.0 0 0
hdisk47 0.0 0.0 0.0 0 0
hdisk48 0.0 0.0 0.0 0 0
hdisk49 0.0 0.0 0.0 0 0
hdisk50 0.0 0.0 0.0 0 0
hdisk51 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk100 0.0 0.0 0.0 0 0
hdisk101 0.0 0.0 0.0 0 0
hdisk52 0.0 0.0 0.0 0 0
hdisk54 0.0 0.0 0.0 0 0
hdisk55 0.0 27.1 2.4 0 56
hdisk56 0.0 0.0 0.0 0 0
hdisk57 0.0 0.0 0.0 0 0
hdisk58 0.0 0.0 0.0 0 0
hdisk59 0.0 0.0 0.0 0 0
hdisk60 0.0 0.0 0.0 0 0
hdisk61 0.0 0.0 0.0 0 0
hdisk62 0.0 0.0 0.0 0 0
hdisk63 0.0 0.0 0.0 0 0
hdisk64 0.0 0.0 0.0 0 0
hdisk65 0.0 0.0 0.0 0 0
hdisk66 0.0 3.9 0.5 8 0
hdisk67 0.0 0.0 0.0 0 0
hdisk68 0.0 0.0 0.0 0 0
hdisk69 0.0 0.0 0.0 0 0
hdisk71 0.0 0.0 0.0 0 0
hdisk72 0.0 0.0 0.0 0 0
hdisk73 1.0 3.9 0.5 8 0
hdisk74 0.0 0.0 0.0 0 0
hdisk75 0.0 0.0 0.0 0 0
hdisk76 0.0 0.0 0.0 0 0
hdisk77 0.0 3.9 0.5 0 8
hdisk79 0.0 0.0 0.0 0 0
hdisk80 0.0 0.0 0.0 0 0
hdisk81 0.0 0.0 0.0 0 0
hdisk82 0.0 0.0 0.0 0 0
hdisk83 0.0 0.0 0.0 0 0
hdisk84 0.0 0.0 0.0 0 0
hdisk85 0.0 0.0 0.0 0 0
hdisk86 0.0 0.0 0.0 0 0
hdisk87 0.0 0.0 0.0 0 0
hdisk88 0.0 0.0 0.0 0 0
hdisk89 0.0 0.0 0.0 0 0
hdisk90 0.0 0.0 0.0 0 0
hdisk91 0.0 0.0 0.0 0 0
hdisk92 0.0 0.0 0.0 0 0
hdisk93 0.0 0.0 0.0 0 0
hdisk94 0.0 0.0 0.0 0 0
hdisk95 0.0 0.0 0.0 0 0
hdisk96 0.0 0.0 0.0 0 0
hdisk97 0.0 0.0 0.0 0 0
hdisk98 0.0 0.0 0.0 0 0
hdisk99 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
hdisk13 1.0 3.9 0.5 8 0
hdisk39 0.0 0.0 0.0 0 0
hdisk53 0.0 5.8 1.4 0 12
hdisk70 0.0 0.0 0.0 0 0
hdisk78 0.0 0.0 0.0 0 0
hdiskpower0 0.0 0.0 0.0 0 0
hdiskpower1 0.0 9.7 2.4 0 20
hdiskpower2 0.0 0.0 0.0 0 0
hdiskpower3 0.0 42.5 4.8 0 88
hdiskpower4 0.0 0.0 0.0 0 0
hdiskpower5 0.0 3.9 0.5 0 8
hdiskpower6 1.0 3.9 0.5 8 0
hdiskpower7 0.0 0.0 0.0 0 0
hdiskpower8 0.0 0.0 0.0 0 0
hdiskpower9 1.0 3.9 0.5 8 0
hdiskpower10 0.0 0.0 0.0 0 0
hdiskpower11 0.0 0.0 0.0 0 0
hdiskpower12 0.0 0.0 0.0 0 0
hdiskpower13 0.0 0.0 0.0 0 0
hdiskpower14 0.0 7.7 1.0 8 8
hdiskpower15 0.0 0.0 0.0 0 0
hdiskpower16 0.0 0.0 0.0 0 0
hdiskpower17 0.0 0.0 0.0 0 0
hdiskpower18 0.0 0.0 0.0 0 0
hdiskpower19 0.0 0.0 0.0 0 0
hdiskpower20 0.0 0.0 0.0 0 0
hdiskpower21 1.0 3.9 0.5 8 0
hdiskpower22 0.0 0.0 0.0 0 0
hdiskpower23 0.0 0.0 0.0 0 0
hdiskpower24 0.0 0.0 0.0 0 0
hdiskpower25 0.5 7.7 1.0 8 8
hdiskpower26 0.0 0.0 0.0 0 0
hdiskpower27 0.0 0.0 0.0 0 0
hdiskpower28 0.0 0.0 0.0 0 0
hdiskpower29 0.0 0.0 0.0 0 0
hdiskpower30 0.0 0.0 0.0 0 0
hdiskpower31 0.0 0.0 0.0 0 0
hdiskpower32 0.0 0.0 0.0 0 0
hdiskpower33 0.0 0.0 0.0 0 0
hdiskpower34 0.0 0.0 0.0 0 0
hdiskpower35 0.0 0.0 0.0 0 0
hdiskpower36 0.0 0.0 0.0 0 0
hdiskpower37 0.0 0.0 0.0 0 0
hdiskpower38 0.0 0.0 0.0 0 0
hdiskpower39 0.0 0.0 0.0 0 0
hdiskpower40 0.0 0.0 0.0 0 0
hdiskpower41 0.0 0.0 0.0 0 0
hdiskpower42 0.0 0.0 0.0 0 0
hdiskpower43 0.0 0.0 0.0 0 0
hdiskpower44 0.0 0.0 0.0 0 0
hdiskpower45 0.0 0.0 0.0 0 0
hdiskpower46 0.0 0.0 0.0 0 0
hdiskpower47 0.0 0.0 0.0 0 0
hdiskpower48 0.0 0.0 0.0 0 0
hdiskpower49 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 44.9 1.0 53.5 0.6

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 1.5 12.0 3.0 4 20
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk5 0.0 8.0 2.0 0 16
hdisk6 0.0 0.0 0.0 0 0
hdisk7 0.0 4.0 1.0 0 8
hdisk8 0.0 2.0 0.5 0 4
hdisk9 0.0 0.0 0.0 0 0
hdisk10 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk18 1.0 4.0 0.5 8 0
hdisk19 0.0 0.0 0.0 0 0
hdisk20 0.0 0.0 0.0 0 0
hdisk21 0.0 0.0 0.0 0 0
hdisk22 0.0 0.0 0.0 0 0
hdisk23 0.0 0.0 0.0 0 0
hdisk24 0.0 0.0 0.0 0 0
hdisk25 0.0 0.0 0.0 0 0
hdisk26 0.0 0.0 0.0 0 0
hdisk27 0.0 0.0 0.0 0 0
hdisk28 0.0 0.0 0.0 0 0
hdisk29 0.0 0.0 0.0 0 0
hdisk30 0.0 0.0 0.0 0 0
hdisk31 0.0 0.0 0.0 0 0
hdisk32 0.0 0.0 0.0 0 0
hdisk33 0.0 0.0 0.0 0 0
hdisk34 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk36 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
hdisk38 0.0 0.0 0.0 0 0
hdisk40 0.0 0.0 0.0 0 0
hdisk41 0.0 0.0 0.0 0 0
hdisk42 0.0 0.0 0.0 0 0
hdisk43 0.0 0.0 0.0 0 0
hdisk44 0.0 0.0 0.0 0 0
hdisk45 0.0 0.0 0.0 0 0
hdisk46 0.0 0.0 0.0 0 0
hdisk47 0.0 0.0 0.0 0 0
hdisk48 0.0 0.0 0.0 0 0
hdisk49 0.0 0.0 0.0 0 0
hdisk50 0.0 0.0 0.0 0 0
hdisk51 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk100 0.0 0.0 0.0 0 0
hdisk101 0.0 0.0 0.0 0 0
hdisk52 0.0 0.0 0.0 0 0
hdisk54 0.0 0.0 0.0 0 0
hdisk55 0.0 2.0 0.5 0 4
hdisk56 0.0 10.0 1.5 0 20
hdisk57 0.0 0.0 0.0 0 0
hdisk58 0.0 0.0 0.0 0 0
hdisk59 0.0 0.0 0.0 0 0
hdisk60 0.0 0.0 0.0 0 0
hdisk61 0.0 0.0 0.0 0 0
hdisk62 0.0 0.0 0.0 0 0
hdisk63 0.0 0.0 0.0 0 0
hdisk64 0.0 0.0 0.0 0 0
hdisk65 0.0 0.0 0.0 0 0
hdisk66 0.0 0.0 0.0 0 0
hdisk67 0.0 0.0 0.0 0 0
hdisk68 0.0 0.0 0.0 0 0
hdisk69 0.0 0.0 0.0 0 0
hdisk71 0.0 0.0 0.0 0 0
hdisk72 0.0 0.0 0.0 0 0
hdisk73 0.0 0.0 0.0 0 0
hdisk74 0.0 0.0 0.0 0 0
hdisk75 0.0 0.0 0.0 0 0
hdisk76 0.0 0.0 0.0 0 0
hdisk77 1.0 4.0 0.5 8 0
hdisk79 0.0 0.0 0.0 0 0
hdisk80 0.0 0.0 0.0 0 0
hdisk81 0.0 0.0 0.0 0 0
hdisk82 0.0 0.0 0.0 0 0
hdisk83 0.0 0.0 0.0 0 0
hdisk84 0.0 0.0 0.0 0 0
hdisk85 0.0 0.0 0.0 0 0
hdisk86 0.0 0.0 0.0 0 0
hdisk87 0.0 0.0 0.0 0 0
hdisk88 0.0 0.0 0.0 0 0
hdisk89 0.0 0.0 0.0 0 0
hdisk90 0.0 0.0 0.0 0 0
hdisk91 0.0 0.0 0.0 0 0
hdisk92 0.0 0.0 0.0 0 0
hdisk93 0.0 0.0 0.0 0 0
hdisk94 0.0 0.0 0.0 0 0
hdisk95 0.0 0.0 0.0 0 0
hdisk96 0.0 0.0 0.0 0 0
hdisk97 0.0 0.0 0.0 0 0
hdisk98 0.0 0.0 0.0 0 0
hdisk99 0.0 0.0 0.0 0 0
hdisk1 1.5 10.0 2.5 0 20
cd0 0.0 0.0 0.0 0 0
hdisk13 0.5 0.0 0.0 0 0
hdisk39 0.0 0.0 0.0 0 0
hdisk53 0.0 6.0 1.5 0 12
hdisk70 0.0 0.0 0.0 0 0
hdisk78 0.0 0.0 0.0 0 0
hdiskpower0 0.0 0.0 0.0 0 0
hdiskpower1 0.0 14.0 3.5 0 28
hdiskpower2 0.0 0.0 0.0 0 0
hdiskpower3 0.0 6.0 1.5 0 12
hdiskpower4 0.0 12.0 2.0 0 24
hdiskpower5 0.0 0.0 0.0 0 0
hdiskpower6 0.0 0.0 0.0 0 0
hdiskpower7 0.0 0.0 0.0 0 0
hdiskpower8 0.0 0.0 0.0 0 0
hdiskpower9 0.0 0.0 0.0 0 0
hdiskpower10 0.0 0.0 0.0 0 0
hdiskpower11 0.0 0.0 0.0 0 0
hdiskpower12 0.0 0.0 0.0 0 0
hdiskpower13 0.0 0.0 0.0 0 0
hdiskpower14 1.0 4.0 0.5 8 0
hdiskpower15 0.0 0.0 0.0 0 0
hdiskpower16 0.0 0.0 0.0 0 0
hdiskpower17 0.0 0.0 0.0 0 0
hdiskpower18 0.0 0.0 0.0 0 0
hdiskpower19 0.0 0.0 0.0 0 0
hdiskpower20 0.0 0.0 0.0 0 0
hdiskpower21 0.0 0.0 0.0 0 0
hdiskpower22 0.0 0.0 0.0 0 0
hdiskpower23 0.0 0.0 0.0 0 0
hdiskpower24 0.0 0.0 0.0 0 0
hdiskpower25 1.0 4.0 0.5 8 0
hdiskpower26 0.0 0.0 0.0 0 0
hdiskpower27 0.0 0.0 0.0 0 0
hdiskpower28 0.0 0.0 0.0 0 0
hdiskpower29 0.0 0.0 0.0 0 0
hdiskpower30 0.0 0.0 0.0 0 0
hdiskpower31 0.0 0.0 0.0 0 0
hdiskpower32 0.0 0.0 0.0 0 0
hdiskpower33 0.0 0.0 0.0 0 0
hdiskpower34 0.0 0.0 0.0 0 0
hdiskpower35 0.0 0.0 0.0 0 0
hdiskpower36 0.0 0.0 0.0 0 0
hdiskpower37 0.0 0.0 0.0 0 0
hdiskpower38 0.0 0.0 0.0 0 0
hdiskpower39 0.0 0.0 0.0 0 0
hdiskpower40 0.0 0.0 0.0 0 0
hdiskpower41 0.0 0.0 0.0 0 0
hdiskpower42 0.0 0.0 0.0 0 0
hdiskpower43 0.0 0.0 0.0 0 0
hdiskpower44 0.0 0.0 0.0 0 0
hdiskpower45 0.0 0.0 0.0 0 0
hdiskpower46 0.0 0.0 0.0 0 0
hdiskpower47 0.0 0.0 0.0 0 0
hdiskpower48 0.0 0.0 0.0 0 0
hdiskpower49 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 27.0 1.0 71.9 0.1

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk8 0.0 4.0 0.5 0 8
hdisk9 0.5 36.0 1.0 0 72
hdisk10 0.0 0.0 0.0 0 0
hdisk11 0.5 32.0 0.5 0 64
hdisk12 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk18 0.0 0.0 0.0 0 0
hdisk19 0.0 0.0 0.0 0 0
hdisk20 0.5 4.0 0.5 8 0
hdisk21 0.0 0.0 0.0 0 0
hdisk22 0.0 0.0 0.0 0 0
hdisk23 0.0 0.0 0.0 0 0
hdisk24 0.0 0.0 0.0 0 0
hdisk25 0.5 4.0 0.5 8 0
hdisk26 0.0 0.0 0.0 0 0
hdisk27 0.0 0.0 0.0 0 0
hdisk28 0.0 0.0 0.0 0 0
hdisk29 0.0 4.0 0.5 0 8
hdisk30 0.0 0.0 0.0 0 0
hdisk31 0.0 0.0 0.0 0 0
hdisk32 0.0 0.0 0.0 0 0
hdisk33 0.0 0.0 0.0 0 0
hdisk34 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk36 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
hdisk38 0.0 0.0 0.0 0 0
hdisk40 0.0 0.0 0.0 0 0
hdisk41 0.0 0.0 0.0 0 0
hdisk42 0.0 0.0 0.0 0 0
hdisk43 0.0 0.0 0.0 0 0
hdisk44 0.0 0.0 0.0 0 0
hdisk45 0.0 0.0 0.0 0 0
hdisk46 0.0 0.0 0.0 0 0
hdisk47 0.0 0.0 0.0 0 0
hdisk48 0.0 0.0 0.0 0 0
hdisk49 0.0 0.0 0.0 0 0
hdisk50 0.0 0.0 0.0 0 0
hdisk51 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk100 0.0 0.0 0.0 0 0
hdisk101 0.0 0.0 0.0 0 0
hdisk52 0.0 0.0 0.0 0 0
hdisk54 0.0 0.0 0.0 0 0
hdisk55 0.0 0.0 0.0 0 0
hdisk56 0.5 26.0 0.5 0 52
hdisk57 0.0 0.0 0.0 0 0
hdisk58 0.5 32.0 0.5 0 64
hdisk59 0.0 0.0 0.0 0 0
hdisk60 0.5 32.0 0.5 0 64
hdisk61 0.0 0.0 0.0 0 0
hdisk62 0.5 26.0 0.5 0 52
hdisk63 0.0 0.0 0.0 0 0
hdisk64 0.0 0.0 0.0 0 0
hdisk65 0.0 0.0 0.0 0 0
hdisk66 0.0 4.0 0.5 0 8
hdisk67 0.0 0.0 0.0 0 0
hdisk68 0.0 0.0 0.0 0 0
hdisk69 0.0 0.0 0.0 0 0
hdisk71 0.0 0.0 0.0 0 0
hdisk72 0.0 0.0 0.0 0 0
hdisk73 0.0 0.0 0.0 0 0
hdisk74 0.0 0.0 0.0 0 0
hdisk75 0.0 0.0 0.0 0 0
hdisk76 0.0 0.0 0.0 0 0
hdisk77 0.0 0.0 0.0 0 0
hdisk79 0.0 0.0 0.0 0 0
hdisk80 0.0 0.0 0.0 0 0
hdisk81 0.0 0.0 0.0 0 0
hdisk82 0.0 0.0 0.0 0 0
hdisk83 0.0 0.0 0.0 0 0
hdisk84 0.0 0.0 0.0 0 0
hdisk85 0.0 0.0 0.0 0 0
hdisk86 0.0 0.0 0.0 0 0
hdisk87 0.0 0.0 0.0 0 0
hdisk88 0.0 0.0 0.0 0 0
hdisk89 0.0 0.0 0.0 0 0
hdisk90 0.0 0.0 0.0 0 0
hdisk91 0.0 0.0 0.0 0 0
hdisk92 0.0 0.0 0.0 0 0
hdisk93 0.0 0.0 0.0 0 0
hdisk94 0.0 0.0 0.0 0 0
hdisk95 0.0 0.0 0.0 0 0
hdisk96 0.0 0.0 0.0 0 0
hdisk97 0.0 0.0 0.0 0 0
hdisk98 0.0 0.0 0.0 0 0
hdisk99 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
hdisk13 0.5 32.0 0.5 0 64
hdisk39 0.0 0.0 0.0 0 0
hdisk53 0.0 0.0 0.0 0 0
hdisk70 0.0 0.0 0.0 0 0
hdisk78 0.0 0.0 0.0 0 0
hdiskpower0 0.0 0.0 0.0 0 0
hdiskpower1 0.0 0.0 0.0 0 0
hdiskpower2 0.0 0.0 0.0 0 0
hdiskpower3 0.0 0.0 0.0 0 0
hdiskpower4 1.0 30.0 1.0 0 60
hdiskpower5 0.5 36.0 1.0 0 72
hdiskpower6 0.5 32.0 0.5 0 64
hdiskpower7 0.5 32.0 0.5 0 64
hdiskpower8 0.5 32.0 0.5 0 64
hdiskpower9 0.5 32.0 0.5 0 64
hdiskpower10 0.5 26.0 0.5 0 52
hdiskpower11 0.0 0.0 0.0 0 0
hdiskpower12 0.0 0.0 0.0 0 0
hdiskpower13 0.0 0.0 0.0 0 0
hdiskpower14 0.0 4.0 0.5 0 8
hdiskpower15 0.0 0.0 0.0 0 0
hdiskpower16 0.5 4.0 0.5 8 0
hdiskpower17 0.0 0.0 0.0 0 0
hdiskpower18 0.0 0.0 0.0 0 0
hdiskpower19 0.0 0.0 0.0 0 0
hdiskpower20 0.0 0.0 0.0 0 0
hdiskpower21 0.5 4.0 0.5 8 0
hdiskpower22 0.0 0.0 0.0 0 0
hdiskpower23 0.0 0.0 0.0 0 0
hdiskpower24 0.0 0.0 0.0 0 0
hdiskpower25 0.0 4.0 0.5 0 8
hdiskpower26 0.0 0.0 0.0 0 0
hdiskpower27 0.0 0.0 0.0 0 0
hdiskpower28 0.0 0.0 0.0 0 0
hdiskpower29 0.0 0.0 0.0 0 0
hdiskpower30 0.0 0.0 0.0 0 0
hdiskpower31 0.0 0.0 0.0 0 0
hdiskpower32 0.0 0.0 0.0 0 0
hdiskpower33 0.0 0.0 0.0 0 0
hdiskpower34 0.0 0.0 0.0 0 0
hdiskpower35 0.0 0.0 0.0 0 0
hdiskpower36 0.0 0.0 0.0 0 0
hdiskpower37 0.0 0.0 0.0 0 0
hdiskpower38 0.0 0.0 0.0 0 0
hdiskpower39 0.0 0.0 0.0 0 0
hdiskpower40 0.0 0.0 0.0 0 0
hdiskpower41 0.0 0.0 0.0 0 0
hdiskpower42 0.0 0.0 0.0 0 0
hdiskpower43 0.0 0.0 0.0 0 0
hdiskpower44 0.0 0.0 0.0 0 0
hdiskpower45 0.0 0.0 0.0 0 0
hdiskpower46 0.0 0.0 0.0 0 0
hdiskpower47 0.0 0.0 0.0 0 0
hdiskpower48 0.0 0.0 0.0 0 0
hdiskpower49 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 25.5 0.5 73.8 0.2

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk8 0.0 0.0 0.0 0 0
hdisk9 0.0 0.0 0.0 0 0
hdisk10 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk14 0.0 4.0 1.0 0 8
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk18 0.0 4.0 0.5 0 8
hdisk19 0.0 0.0 0.0 0 0
hdisk20 0.0 0.0 0.0 0 0
hdisk21 0.0 0.0 0.0 0 0
hdisk22 0.0 0.0 0.0 0 0
hdisk23 0.0 0.0 0.0 0 0
hdisk24 0.0 0.0 0.0 0 0
hdisk25 0.0 0.0 0.0 0 0
hdisk26 0.0 0.0 0.0 0 0
hdisk27 0.0 0.0 0.0 0 0
hdisk28 0.0 0.0 0.0 0 0
hdisk29 0.0 0.0 0.0 0 0
hdisk30 0.0 0.0 0.0 0 0
hdisk31 0.0 0.0 0.0 0 0
hdisk32 0.0 0.0 0.0 0 0
hdisk33 0.0 0.0 0.0 0 0
hdisk34 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk36 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
hdisk38 0.0 0.0 0.0 0 0
hdisk40 0.0 0.0 0.0 0 0
hdisk41 0.0 0.0 0.0 0 0
hdisk42 0.0 0.0 0.0 0 0
hdisk43 0.0 0.0 0.0 0 0
hdisk44 0.0 0.0 0.0 0 0
hdisk45 0.0 0.0 0.0 0 0
hdisk46 0.0 0.0 0.0 0 0
hdisk47 0.0 0.0 0.0 0 0
hdisk48 0.0 0.0 0.0 0 0
hdisk49 0.0 0.0 0.0 0 0
hdisk50 0.0 0.0 0.0 0 0
hdisk51 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk100 0.0 0.0 0.0 0 0
hdisk101 0.0 0.0 0.0 0 0
hdisk52 0.0 0.0 0.0 0 0
hdisk54 0.0 0.0 0.0 0 0
hdisk55 1.0 4.0 0.5 8 0
hdisk56 0.0 0.0 0.0 0 0
hdisk57 0.0 4.0 0.5 0 8
hdisk58 0.0 0.0 0.0 0 0
hdisk59 0.0 0.0 0.0 0 0
hdisk60 0.0 0.0 0.0 0 0
hdisk61 0.0 0.0 0.0 0 0
hdisk62 0.0 6.0 1.0 0 12
hdisk63 0.0 0.0 0.0 0 0
hdisk64 0.0 0.0 0.0 0 0
hdisk65 0.0 0.0 0.0 0 0
hdisk66 0.0 0.0 0.0 0 0
hdisk67 0.0 0.0 0.0 0 0
hdisk68 0.0 0.0 0.0 0 0
hdisk69 0.0 0.0 0.0 0 0
hdisk71 0.0 0.0 0.0 0 0
hdisk72 0.0 0.0 0.0 0 0
hdisk73 0.0 0.0 0.0 0 0
hdisk74 0.0 0.0 0.0 0 0
hdisk75 0.0 0.0 0.0 0 0
hdisk76 0.0 0.0 0.0 0 0
hdisk77 0.0 4.0 0.5 0 8
hdisk79 0.0 0.0 0.0 0 0
hdisk80 0.0 0.0 0.0 0 0
hdisk81 0.0 0.0 0.0 0 0
hdisk82 0.0 0.0 0.0 0 0
hdisk83 0.0 0.0 0.0 0 0
hdisk84 0.0 0.0 0.0 0 0
hdisk85 0.0 0.0 0.0 0 0
hdisk86 0.0 0.0 0.0 0 0
hdisk87 0.0 0.0 0.0 0 0
hdisk88 0.0 0.0 0.0 0 0
hdisk89 0.0 0.0 0.0 0 0
hdisk90 0.0 0.0 0.0 0 0
hdisk91 0.0 0.0 0.0 0 0
hdisk92 0.0 0.0 0.0 0 0
hdisk93 0.0 0.0 0.0 0 0
hdisk94 0.0 0.0 0.0 0 0
hdisk95 0.0 0.0 0.0 0 0
hdisk96 0.0 0.0 0.0 0 0
hdisk97 0.0 0.0 0.0 0 0
hdisk98 0.0 0.0 0.0 0 0
hdisk99 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
hdisk13 0.0 0.0 0.0 0 0
hdisk39 0.0 0.0 0.0 0 0
hdisk53 0.0 0.0 0.0 0 0
hdisk70 0.0 0.0 0.0 0 0
hdisk78 0.0 0.0 0.0 0 0
hdiskpower0 0.0 0.0 0.0 0 0
hdiskpower1 0.0 0.0 0.0 0 0
hdiskpower2 0.0 0.0 0.0 0 0
hdiskpower3 1.0 4.0 0.5 8 0
hdiskpower4 0.0 0.0 0.0 0 0
hdiskpower5 0.0 4.0 0.5 0 8
hdiskpower6 0.0 0.0 0.0 0 0
hdiskpower7 0.0 0.0 0.0 0 0
hdiskpower8 0.0 0.0 0.0 0 0
hdiskpower9 0.0 0.0 0.0 0 0
hdiskpower10 0.0 10.0 2.0 0 20
hdiskpower11 0.0 0.0 0.0 0 0
hdiskpower12 0.0 0.0 0.0 0 0
hdiskpower13 0.0 0.0 0.0 0 0
hdiskpower14 0.0 4.0 0.5 0 8
hdiskpower15 0.0 0.0 0.0 0 0
hdiskpower16 0.0 0.0 0.0 0 0
hdiskpower17 0.0 0.0 0.0 0 0
hdiskpower18 0.0 0.0 0.0 0 0
hdiskpower19 0.0 0.0 0.0 0 0
hdiskpower20 0.0 0.0 0.0 0 0
hdiskpower21 0.0 0.0 0.0 0 0
hdiskpower22 0.0 0.0 0.0 0 0
hdiskpower23 0.0 0.0 0.0 0 0
hdiskpower24 0.0 0.0 0.0 0 0
hdiskpower25 0.0 4.0 0.5 0 8
hdiskpower26 0.0 0.0 0.0 0 0
hdiskpower27 0.0 0.0 0.0 0 0
hdiskpower28 0.0 0.0 0.0 0 0
hdiskpower29 0.0 0.0 0.0 0 0
hdiskpower30 0.0 0.0 0.0 0 0
hdiskpower31 0.0 0.0 0.0 0 0
hdiskpower32 0.0 0.0 0.0 0 0
hdiskpower33 0.0 0.0 0.0 0 0
hdiskpower34 0.0 0.0 0.0 0 0
hdiskpower35 0.0 0.0 0.0 0 0
hdiskpower36 0.0 0.0 0.0 0 0
hdiskpower37 0.0 0.0 0.0 0 0
hdiskpower38 0.0 0.0 0.0 0 0
hdiskpower39 0.0 0.0 0.0 0 0
hdiskpower40 0.0 0.0 0.0 0 0
hdiskpower41 0.0 0.0 0.0 0 0
hdiskpower42 0.0 0.0 0.0 0 0
hdiskpower43 0.0 0.0 0.0 0 0
hdiskpower44 0.0 0.0 0.0 0 0
hdiskpower45 0.0 0.0 0.0 0 0
hdiskpower46 0.0 0.0 0.0 0 0
hdiskpower47 0.0 0.0 0.0 0 0
hdiskpower48 0.0 0.0 0.0 0 0
hdiskpower49 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 30.7 1.7 63.6 4.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 2.5 33.8 8.5 68 0
hdisk2 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk7 0.5 69.7 2.0 4 136
hdisk8 0.0 39.8 1.5 0 80
hdisk9 1.0 71.6 2.0 16 128
hdisk10 1.5 35.8 1.0 8 64
hdisk11 0.0 47.8 1.0 0 96
hdisk12 0.0 43.8 1.5 0 88
hdisk14 0.5 43.8 3.0 0 88
hdisk15 0.5 95.5 1.5 0 192
hdisk16 0.0 0.0 0.0 0 0
hdisk17 0.0 41.8 1.0 0 84
hdisk18 1.0 47.8 1.0 0 96
hdisk19 1.0 35.8 1.0 8 64
hdisk20 1.0 47.8 1.0 0 96
hdisk21 2.0 69.7 2.0 12 128
hdisk22 0.5 47.8 2.0 8 88
hdisk23 1.0 87.6 2.0 8 168
hdisk24 0.5 63.7 1.5 0 128
hdisk25 0.5 31.8 0.5 0 64
hdisk26 1.0 63.7 1.5 0 128
hdisk27 1.5 33.8 1.0 4 64
hdisk28 1.5 53.7 1.5 0 108
hdisk29 0.5 65.7 1.5 0 132
hdisk30 0.0 77.6 1.5 0 156
hdisk31 0.0 0.0 0.0 0 0
hdisk32 0.0 0.0 0.0 0 0
hdisk33 0.0 0.0 0.0 0 0
hdisk34 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk36 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
hdisk38 0.0 0.0 0.0 0 0
hdisk40 0.0 0.0 0.0 0 0
hdisk41 0.0 0.0 0.0 0 0
hdisk42 0.0 0.0 0.0 0 0
hdisk43 0.0 0.0 0.0 0 0
hdisk44 0.0 0.0 0.0 0 0
hdisk45 0.0 0.0 0.0 0 0
hdisk46 0.0 0.0 0.0 0 0
hdisk47 0.0 0.0 0.0 0 0
hdisk48 0.0 0.0 0.0 0 0
hdisk49 0.0 0.0 0.0 0 0
hdisk50 0.0 0.0 0.0 0 0
hdisk51 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk100 0.0 0.0 0.0 0 0
hdisk101 0.0 0.0 0.0 0 0
hdisk52 0.0 0.0 0.0 0 0
hdisk54 0.0 0.0 0.0 0 0
hdisk55 1.0 29.9 1.5 4 56
hdisk56 0.0 55.7 1.5 0 112
hdisk57 0.5 31.8 1.0 0 64
hdisk58 0.5 67.7 2.5 4 132
hdisk59 0.5 47.8 1.0 0 96
hdisk60 0.0 51.7 1.0 0 104
hdisk61 0.0 27.9 1.0 0 56
hdisk62 0.0 75.6 4.5 0 152
hdisk63 1.0 23.9 1.5 8 40
hdisk64 0.5 95.5 2.0 0 192
hdisk65 2.0 57.7 1.5 8 108
hdisk66 0.5 47.8 1.0 0 96
hdisk67 1.0 63.7 1.0 0 128
hdisk68 0.5 47.8 1.0 0 96
hdisk69 0.5 43.8 1.5 0 88
hdisk71 0.5 43.8 1.0 0 88
hdisk72 0.5 51.7 1.5 0 104
hdisk73 2.0 71.6 2.0 8 136
hdisk74 0.5 31.8 0.5 0 64
hdisk75 1.0 63.7 1.0 0 128
hdisk76 0.5 43.8 1.0 0 88
hdisk77 0.0 31.8 0.5 0 64
hdisk79 0.0 95.5 1.5 0 192
hdisk80 0.0 0.0 0.0 0 0
hdisk81 0.0 0.0 0.0 0 0
hdisk82 0.0 0.0 0.0 0 0
hdisk83 0.0 0.0 0.0 0 0
hdisk84 0.0 0.0 0.0 0 0
hdisk85 0.0 0.0 0.0 0 0
hdisk86 0.0 0.0 0.0 0 0
hdisk87 0.0 0.0 0.0 0 0
hdisk88 0.0 0.0 0.0 0 0
hdisk89 0.0 0.0 0.0 0 0
hdisk90 0.0 0.0 0.0 0 0
hdisk91 0.0 0.0 0.0 0 0
hdisk92 0.0 0.0 0.0 0 0
hdisk93 0.0 0.0 0.0 0 0
hdisk94 0.0 0.0 0.0 0 0
hdisk95 0.0 0.0 0.0 0 0
hdisk96 0.0 0.0 0.0 0 0
hdisk97 0.0 0.0 0.0 0 0
hdisk98 0.0 0.0 0.0 0 0
hdisk99 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
hdisk13 0.0 67.7 1.5 0 136
hdisk39 0.0 0.0 0.0 0 0
hdisk53 0.0 0.0 0.0 0 0
hdisk70 1.5 93.5 3.5 12 176
hdisk78 0.0 29.9 1.5 0 60
hdiskpower0 0.0 0.0 0.0 0 0
hdiskpower1 0.0 0.0 0.0 0 0
hdiskpower2 0.0 0.0 0.0 0 0
hdiskpower3 1.5 99.5 3.5 8 192
hdiskpower4 0.0 95.5 3.0 0 192
hdiskpower5 1.5 103.5 3.0 16 192
hdiskpower6 2.0 103.5 3.5 12 196
hdiskpower7 0.5 95.5 2.0 0 192
hdiskpower8 0.0 95.5 2.5 0 192
hdiskpower9 0.0 95.5 2.5 0 192
hdiskpower10 0.5 119.4 7.5 0 240
hdiskpower11 1.5 119.4 3.0 8 232
hdiskpower12 0.5 95.5 2.0 0 192
hdiskpower13 2.0 99.5 2.5 8 192
hdiskpower14 1.5 95.5 2.0 0 192
hdiskpower15 2.0 99.5 2.0 8 192
hdiskpower16 1.5 95.5 2.0 0 192
hdiskpower17 2.5 113.4 3.5 12 216
hdiskpower18 2.0 141.3 5.5 20 264
hdiskpower19 1.5 131.3 3.0 8 256
hdiskpower20 1.0 115.4 3.0 0 232
hdiskpower21 2.5 103.5 2.5 8 200
hdiskpower22 1.5 95.5 2.0 0 192
hdiskpower23 2.5 97.5 2.0 4 192
hdiskpower24 1.5 97.5 2.5 0 196
hdiskpower25 0.5 97.5 2.0 0 196
hdiskpower26 0.0 107.5 3.0 0 216
hdiskpower27 0.0 95.5 1.5 0 192
hdiskpower28 0.0 0.0 0.0 0 0
hdiskpower29 0.0 0.0 0.0 0 0
hdiskpower30 0.0 0.0 0.0 0 0
hdiskpower31 0.0 0.0 0.0 0 0
hdiskpower32 0.0 0.0 0.0 0 0
hdiskpower33 0.0 0.0 0.0 0 0
hdiskpower34 0.0 0.0 0.0 0 0
hdiskpower35 0.0 0.0 0.0 0 0
hdiskpower36 0.0 0.0 0.0 0 0
hdiskpower37 0.0 0.0 0.0 0 0
hdiskpower38 0.0 0.0 0.0 0 0
hdiskpower39 0.0 0.0 0.0 0 0
hdiskpower40 0.0 0.0 0.0 0 0
hdiskpower41 0.0 0.0 0.0 0 0
hdiskpower42 0.0 0.0 0.0 0 0
hdiskpower43 0.0 0.0 0.0 0 0
hdiskpower44 0.0 0.0 0.0 0 0
hdiskpower45 0.0 0.0 0.0 0 0
hdiskpower46 0.0 0.0 0.0 0 0
hdiskpower47 0.0 0.0 0.0 0 0
hdiskpower48 0.0 0.0 0.0 0 0
hdiskpower49 0.0 0.0 0.0 0 0


nddb@npgnd1> vmstat 2 10
System Configuration: lcpu=4 mem=8192MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 1 1002612 2972 0 0 1 306 863 0 743 12787 834 12 2 85 1
1 0 1002624 2960 0 0 0 0 0 0 681 18978 477 6 1 93 0
0 0 1002628 2956 0 0 0 0 0 0 650 23203 598 6 1 93 0
2 0 1002664 2918 0 0 0 0 0 0 685 16583 551 26 2 72 0
1 0 1002942 2018 0 0 0 0 0 0 1619 19461 1173 38 2 60 0
2 0 1002685 2220 0 0 0 0 0 0 757 23094 1913 37 3 60 0
1 0 1002685 2219 0 0 0 0 0 0 815 22887 875 26 2 72 0
1 0 1002691 2214 0 0 0 0 0 0 815 23424 974 28 2 70 0
1 0 1002962 1943 0 0 0 0 0 0 896 30546 2747 35 3 62 0
1 0 1002719 2187 0 0 0 0 0 0 851 23059 1292 29 2 69 0
nddb@npgnd1>


[Topas screens for host npgnd1, Tue Feb 8 18:58:54 and 18:58:56 2005 (interval 2) - the paste captured only the screen template; the CPU, paging, memory and network values did not survive the copy, apart from Kernel 1.7% and User 12.0% on the second screen.]
 
# vmstat 2 10
System Configuration: lcpu=4 mem=8192MB
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 1 1004202 4029 0 0 1 305 860 0 743 12813 833 12 2 85 1
1 0 1004469 3530 0 0 0 0 0 0 882 13478 1172 26 1 73 0
1 0 1004207 4295 0 0 0 342 846 0 692 28222 1239 36 2 61 1
1 0 1004207 3974 0 0 0 0 0 0 946 22957 1486 32 3 65 0
2 0 1004207 3561 0 0 0 122 209 0 740 27906 1455 39 3 57 0
1 1 1004353 3429 0 1 0 0 0 0 1072 29920 2454 42 4 46 8
2 0 1004625 3459 0 0 0 237 267 0 685 27949 1112 34 3 59 4
1 0 1004663 2954 0 0 0 0 0 0 1054 25882 1932 35 3 57 5
1 0 1004469 2021 0 0 0 358 542 0 861 24631 1227 31 2 57 9
1 1 1004585 2895 0 0 0 0 0 0 1047 28286 2475 36 6 50 9
 

Topas (first screen; the two-column layout got interleaved in the paste, values regrouped):

PAGING: Faults 29, Steals 0, PgspIn 0, PgspOut 0, PageIn 0, PageOut 308, Sios 309
MEMORY: Real 8191 MB, %Comp 48.1, %Noncomp 52.6, %Client 53.3
PAGING SPACE: Size 5120 MB, %Used 4.3, %Free 95.6
NFS (calls/sec): ServerV2 0, ClientV2 0, ServerV3 0
Top processes (Name, PID, CPU%, PgSp, Owner):
oracle 4370638 0.0 4.8 nddb
lrud 28686 0.0 0.1 root
ImEx 4268208 0.0 10.2 ndio
TclSh 4399318 0.0 3.3 ndx

Topas (second screen, Tue Feb 8 19:16:49 2005, interval 2):

CPU: Kernel 18.6%, User 26.6%, Wait 0.9%, Idle 53.9%
EVENTS/QUEUES: Cswitch 0, Syscall 0, Reads 0, Writes 0, Forks 0, Execs 0, Runqueue 0.0, Waitqueue 0.0
FILE/TTY: Readch 0, Writech 13, Rawin 0, Ttyout 0, Igets 0, Namei 29, Dirblk 0
Network (KBPS, I-Pack, O-Pack, KB-In, KB-Out):
en0 32.4 54.5 44.5 14.6 17.9

Topas (third screen fragment):

Runqueue 0.0, Dirblk 0
Network (KBPS, I-Pack, O-Pack, KB-In, KB-Out):
en0 33.2 49.0 48.0 7.8 25.3
lo0 16.5 36.5 36.5 8.3 8.3
en1 0.4 8.5 0.0 0.4 0.0
PAGING: Faults 1474, Steals 1347, PgspIn 0, PgspOut 104, PageIn 1, PageOut 117, Sios 118
MEMORY: Real 8191 MB, %Comp 48.3, %Noncomp 52.5, %Client 53.2
Disks (Busy%, KBPS, TPS, KB-Read, KB-Writ):
hdisk0 13.5 418.0 38.5 0.0 418.0
hdisk1 12.5 418.0 38.0 0.0 418.0
hdisk16 0.5 4.0 0.5 0.0 4.0
PAGING SPACE: Size 5120 MB, %Used 4.4, %Free 95.5
Top processes (Name, PID, CPU%, PgSp, Owner):
oracle 4370638 25.1 4.8 nddb
topas 2129992 3.7 12.3 root
oracle 4182016 0.7 4.9 nddb
OraNotif 4628504 0.4 22.7 nddb
 
Hi,

In the above data I do not see any system stress or I/O wait.
Also, there is no paging shown, so for now your 80% maxperm will not cause a big problem; however, you had better keep it at 40-50% max.
Try checking your network - there is a possibility that it's not at its best: not enough adapter boards, a wrong full/half duplex config, etc.
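To check speed/duplex on the adapters, something along these lines (adapter names assumed):

entstat -d ent0 | grep -i "media speed"
lsattr -El ent0 -a media_speed      # on adapters that have this attribute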

Long live king Moshiach !
 
It may be down to JFS2 - read this:


If that doesn't help, put Statspack on, or download a copy of Quest Central.
Do you use locally managed tablespaces?



Mike

"A foolproof method for sculpting an elephant: first, get a huge block of marble, then you chip away everything that doesn't look like an elephant.
 
I'm told the network is functioning properly.

Regarding the JFS2 CIO/DIO article: I've already read it, and our Sys Admin says it buys us nothing in a SAN environment.

Yes, we're using LMTs, and I have several Statspack reports, which don't give me much to go on.

What commands can I use to keep tabs on our memory utilization and swapping? Which ones show high-water marks vs. current statistics? Specifically, which columns should I watch, and at what thresholds should I be alarmed?

Thanks.
 
I think your admin needs to read the article again... We had the same problem with our DB2 database: the JFS2 file system can become a bottleneck for databases. Moving to CIO fixed the bottleneck by letting the database use the disk subsystem's cache instead of double-buffering in the filesystem cache. You also definitely need to look at the kernel parameters you mentioned.
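For what it's worth, switching an existing JFS2 filesystem to CIO is just a mount option. Illustrative commands (your mount points; the filesystem has to come offline briefly):

chfs -a options=cio /nddbhome/nddb/DATA
umount /nddbhome/nddb/DATA
mount /nddbhome/nddb/DATA

Keep CIO off the filesystem holding the Oracle binaries, and note that redo-log filesystems generally need to have been created with agblksize=512 before CIO suits them.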
 
FWIW - we start with maxperm% and minperm% set to 20 and 10 respectively on database servers. Since Oracle does its own buffering, stealing memory away from the file system buffers generally does not hurt much. Use this with caution, though, as different databases can react differently depending on other factors.
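As a sketch, the change itself is one vmo line; the maxclient% value here is my own addition, since it may not exceed maxperm%:

vmo -p -o maxperm%=20 -o minperm%=10 -o maxclient%=20   # -p also records it for the next boot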

By default, the maximum number of file system buffers (numfsbufs) is 196. We typically bump this up to at least 512 on any system that will be doing a lot of I/O to JFS file systems. You can see the number of I/Os blocked for lack of file system buffers by running vmstat -v and looking for the "filesystem I/Os blocked with no fsbuf" line. If this is a high number (anything over a couple of thousand), you may want to raise it. This does not require a reboot, but the file systems must be remounted for it to take effect.
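As I understand the 5.2 tooling, that check and bump look like this (pre-5.2 it was vmtune -b; for JFS2 the analogous tunable is j2_nBufferPerPagerDevice, if memory serves):

vmstat -v | grep fsbuf         # I/Os blocked waiting for filesystem buffers
ioo -o numfsbufs=512           # then umount/mount the busy filesystems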

Also, IBM has a helpful Redbook on database performance tuning on AIX at
You may also want to get your system admin to run a filemon. Two things that may jump out in the filemon output are:
1) You are hammering the root disks for paging space (though you don't seem to be using much paging space).

2) You are hammering a JFS log logical volume. This is very often the case with AIX and JFS file systems: many admins do not realize that they need to create separate JFS logs for file systems with heavy I/O (a rough sketch follows below). Additionally, since Oracle logs all of its transactions itself, you can decide whether or not to turn off JFS journaling (with the nointegrity mount option) for the Oracle data file systems, so that you are not logging at both the JFS and Oracle levels.
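Creating a dedicated log for a busy filesystem looks roughly like this (LV and VG names invented for the example; your filesystems are JFS2, hence jfs2log - for plain JFS it would be -t jfslog):

mklv -t jfs2log -y dbloglv datavg 1     # one PP is usually plenty for a log
logform /dev/dbloglv                    # initialize it as a journal log
chfs -a log=/dev/dbloglv /nddbhome/nddb/DATA
umount /nddbhome/nddb/DATA
mount /nddbhome/nddb/DATA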

Hope this helps.
 