
CPU / Memory Problem


PTameris (Technical User, NL)
Hi,

We have a serious performance problem with our AIX pSeries server running AIX 4.3.3.

The pSeries machine is an S85 server with 8 CPUs and 24 GB of internal memory (and 6 GB of paging space).

We are using an Oracle 8.1.7 database and have around 1000 Oracle connections.

Over the last month we have seen more and more paging to disk, and during the busy hours (10:30 and 15:30 CET) we have a 100% CPU load.

According to IBM this machine can handle around 2500 Oracle connections...

So what is wrong with our settings?
Why is it already paging to disk?

For your info, here is some output from the machine:
#lsattr -E -l sys0 -a realmem
realmem 25165824 Amount of usable physical memory in Kbytes False

#lsps -s
Total Paging Space Percent Used
6144MB 24%



#lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging01 hdisk1 pagevg 2048MB 24 yes yes lv
paging00 hdisk2 perfvg 2048MB 24 yes yes lv
hd6 hdisk0 rootvg 2048MB 24 yes yes lv

# vmstat 2
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 3 1470223 210508 0 0 0 1422 1282 0 414 1971 1500 88 94 99 59
3 7 1465485 214707 0 0 0 0 0 0 2848 31953 14591 30 18 26 27
3 8 1467528 209350 0 0 0 0 0 0 3217 40869 25871 27 22 23 28
3 8 1467812 204489 0 0 0 0 0 0 3082 28764 31617 24 22 26 28
2 7 1468104 198725 0 0 0 0 0 0 3011 18157 33753 19 14 30 36
4 7 1469511 192674 0 0 0 0 0 0 2993 30201 31152 19 26 28 27
3 7 1469538 190357 0 0 0 0 0 0 2162 20648 18488 13 21 32 34
2 6 1469998 189132 0 0 0 0 0 0 1979 21613 13117 14 19 39 29
3 5 1470587 187379 0 0 0 0 0 0 1947 23281 12188 15 18 36 31
2 6 1472608 184982 0 0 0 0 0 0 2734 51722 12550 21 22 38 18
5 7 1467856 188993 0 0 0 0 0 0 3088 78239 13191 20 35 23 21
3 6 1473987 176793 0 0 0 0 0 0 2783 25293 29238 20 36 22 23
3 5 1468310 176252 0 0 0 0 0 0 2961 28108 34162 19 34 23 24
6 7 1465294 174925 0 0 0 0 0 0 2946 38061 31417 18 34 24 24
4 5 1465111 173676 0 0 0 0 0 0 2368 28924 13782 29 30 18 23
5 6 1463706 173446 0 0 0 0 0 0 2135 25798 14247 29 28 23 21
6 5 1462929 175498 0 0 0 0 0 0 1944 20058 10553 28 21 29 22
6 5 1463532 188869 0 0 0 0 0 0 1731 23732 10750 24 24 32 21
7 6 1463086 185838 0 0 0 0 0 0 2570 19635 28689 17 32 31 21
3 6 1462752 182424 0 0 0 0 0 0 2605 17292 35752 15 31 28 27
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 10 1462565 177337 0 0 0 0 0 0 2867 11434 36568 11 19 33 36
3 5 1462554 172417 0 0 0 0 0 0 2590 17417 31772 13 14 38 36
6 8 1463786 166770 0 0 0 0 0 0 2516 32620 28999 17 22 36 25
1 4 1464677 161805 0 0 0 0 0 0 2659 24602 26877 17 12 45 25
2 7 1463744 156665 0 0 0 0 0 0 2523 31844 32554 16 15 41 27
2 6 1463745 152177 0 0 0 0 0 0 2476 41847 28417 16 13 39 32
2 7 1463550 148351 0 0 0 0 0 0 2473 34091 26570 17 13 36 34
4 8 1463593 144694 0 0 0 0 0 0 2516 31392 24196 27 16 26 31
5 8 1464614 139329 0 0 0 0 0 0 2758 33268 29344 18 19 30 34
2 6 1464715 134423 0 0 0 0 0 0 2699 26064 29841 17 13 34 36
5 9 1464788 139100 0 0 0 0 0 0 2464 18065 24575 14 12 36 38
3 8 1464788 134384 0 0 0 0 0 0 2625 15827 30110 11 13 29 47
2 7 1464790 130033 0 0 0 0 0 0 2811 21990 28979 17 12 32 39
4 7 1464936 124708 0 0 0 0 0 0 2663 42988 31460 21 15 25 39
3 9 1465571 120795 0 0 0 935 9202 0 2818 46425 33260 20 18 22 40
4 9 1467351 116400 0 0 0 1243 46117 0 3322 45722 32458 18 17 21 44
2 7 1465766 117709 0 0 0 1111 71609 0 2334 57352 14443 23 13 26 38
3 7 1464908 117509 0 0 0 2729 6303 0 3101 38978 27375 43 21 18 18
3 7 1465483 116732 0 0 0 2262 5635 0 2517 37283 20649 30 14 28 28
2 7 1465876 116332 0 0 0 2280 22628 0 2417 28256 22816 20 13 30 38
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 7 1461513 120749 0 0 0 816 12165 0 2298 43533 14330 16 23 26 35
1 10 1463489 118674 0 0 0 1037 14676 0 2530 32953 17428 15 10 30 45
1 10 1462956 119265 0 0 0 1249 69037 0 2268 26853 16514 12 11 24 53
1 12 1462982 119224 0 0 0 2929 26019 0 2935 24471 34872 12 16 22 50
2 10 1463226 118995 0 0 0 3121 31686 0 3083 28326 41683 24 24 18 35
2 10 1463260 118968 0 0 0 2406 75458 0 2590 17752 26333 18 18 24 40
2 8 1463260 118963 0 0 0 645 2652 0 2100 16569 15367 16 7 38 39
1 8 1463260 118977 0 0 0 494 1748 0 1813 12472 12932 12 5 43 40
1 8 1463695 118582 0 0 0 921 2952 0 1832 15187 14932 9 8 42 41
5 10 1463510 118585 0 0 0 1231 5937 0 2319 44205 15756 14 17 27 42
1 9 1464034 117898 0 0 0 950 3347 0 2409 31598 15772 13 14 28 45
3 8 1464041 153910 0 0 0 148 436 0 2257 28270 17211 15 30 22 33
3 7 1463481 150684 0 0 0 0 0 0 2201 22733 18810 17 29 25 29
2 8 1463306 150219 0 0 0 0 0 0 1707 31664 7651 16 24 25 35
1 5 1463309 148564 0 0 0 0 0 0 1557 7161 10611 5 17 50 28
6 4 1463309 145430 0 0 0 0 0 0 1861 6222 16640 5 42 35 18
2 4 1462801 143079 0 0 0 0 0 0 1704 7839 17125 8 33 43 17
2 4 1462357 137966 0 0 0 0 0 0 2239 8996 28768 11 30 40 20
1 5 1462456 133674 0 0 0 0 0 0 2072 4598 24847 5 21 49 25


Greetings
Peter
 
From what you sent I can't see any paging activity.

If you are sure that you are experiencing paging problems during the busy hours, check vmtune.

You should post the output of svmon -G during the busy hours, and the output of /usr/samples/kernel/vmtune.

And also the output of iostat 1.
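
If it helps, here is a minimal ksh sketch for capturing those snapshots around the busy hours (the output directory and sample counts are just assumptions, adjust as needed):

#!/usr/bin/ksh
# collect a timestamped set of memory and I/O snapshots for later comparison
OUTDIR=/tmp/perfdata
mkdir -p $OUTDIR
STAMP=$(date +%Y%m%d_%H%M)
svmon -G                        > $OUTDIR/svmon.$STAMP
/usr/samples/kernel/vmtune      > $OUTDIR/vmtune.$STAMP
iostat 1 60                     > $OUTDIR/iostat.$STAMP    # one minute of 1-second samples
vmstat 5 12                     > $OUTDIR/vmstat.$STAMP    # one minute of 5-second samples

Run it from cron a few minutes before 10:30 and 15:30 and you get comparable pictures of the busy windows.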

bye.
 
Dear Raztaboule,

Here is the output you requested...
Of course, it wasn't very busy on the system today...

The problem started around 2 months ago; before that there was no swapping (paging to disk) at all, just the normal 1%.
But over the last weeks we have seen it grow to 24% now.

And I think that is why the CPUs get overloaded, because they have to do much more work...

Or is my problem related to network problems?

We have a NAS and all disks are mounted via NFS.
We have already tried to tune the maxperm & maxclient parameters and have brought them down from 80 to 50 now...

Here are the results and the output of a topas screen:


# vmtune
vmtune: current values:
-p -P -r -R -f -F -N -W
minperm maxperm minpgahead maxpgahead minfree maxfree pd_npages maxrandwrt
629138 3145694 4 16 960 992 524288 0

-M -w -k -c -b -B -u -l -d
maxpin npswarn npskill numclust numfsbufs hd_pbuf_cnt lvm_bufcnt lrubucket defps
5033112 49152 12288 1 93 176 9 131072 1

-s -n -S -L -g -h
sync_release_ilock nokilluid v_pinshm lgpg_regions lgpg_size strict_maxperm
1 0 0 0 0 0

-t
maxclient
3145694

PTA balance threshold percentage = 50.0%

number of valid memory pages = 6291389 maxperm=50.0% of real memory
maximum pinable=80.0% of real memory minperm=10.0% of real memory
number of file memory pages = 3247930 numperm=51.6% of real memory

number of compressed memory pages = 0 compressed=0.0% of real memory
number of client memory pages = 2474143 numclient=39.3% of real memory
# of remote pgs sched-pageout = 0 maxclient=50.0% of real memory



# svmon -G
size inuse free pin virtual
memory 6291389 5008271 1283118 316188 1667754
pg space 1572864 375902

work pers clnt
pin 316188 0 0
in use 1760282 773802 2474187



#iostat 1 10
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.2 -72.9 14.1 15.1 61.3 9.5

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 0.0 0.0 0.0 135 545840
hdisk6 0.0 0.0 0.0 8 0
hdisk5 0.0 0.0 0.0 8 12600
hdisk7 0.0 0.0 0.0 6 452
hdisk0 0.0 0.0 0.0 17904 545844
hdisk1 0.0 0.0 0.0 17008 12600
hdisk2 0.0 0.0 0.0 17004 0
hdisk3 0.0 0.0 0.0 16802 5024
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 942.0 25.5 38.9 11.4 24.2

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 100.0 1205.0 255.0 0 1205
hdisk6 0.0 0.0 0.0 0 0
hdisk5 10.0 80.0 20.0 0 80
hdisk7 0.0 0.0 0.0 0 0
hdisk0 100.0 1222.0 259.0 0 1222
hdisk1 11.0 80.0 20.0 0 80
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 18.5 46.9 16.2 18.4

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 100.0 1142.0 242.0 0 1142
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 100.0 1226.0 260.0 0 1226
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 1092.0 17.1 24.8 22.5 35.6

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 100.0 1085.0 226.0 0 1085
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 96.0 1093.0 230.0 0 1093
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 17.0 37.7 21.6 23.7

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 95.9 590.3 151.8 0 591
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 70.9 477.4 128.8 0 478
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 1102.0 21.1 47.0 12.4 19.5

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 88.0 493.0 118.0 0 493
hdisk6 0.0 0.0 0.0 0 0
hdisk5 2.0 16.0 3.0 0 16
hdisk7 1.0 8.0 2.0 0 8
hdisk0 92.0 497.0 119.0 0 497
hdisk1 1.0 16.0 3.0 0 16
hdisk2 0.0 0.0 0.0 0 0
hdisk3 7.0 32.0 10.0 0 32
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 18.0 49.1 10.5 22.4

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 50.0 408.0 76.0 0 408
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 50.0 404.0 75.0 0 404
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 3.0 12.0 0.0 0 12
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 1032.0 15.6 52.9 12.4 19.1

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 5.0 12.0 3.0 0 12
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 7.0 12.0 3.0 0 12
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 0.0 11.2 50.7 19.9 18.2

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 1101.0 7.9 49.8 26.1 16.2

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk4 2.0 20.0 5.0 0 20
hdisk6 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk0 3.0 20.0 5.0 0 20
hdisk1 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0


# topas
Topas Monitor for host: nlamsds101 EVENTS/QUEUES FILE/TTY
Sun Oct 10 19:20:59 2004 Interval: 2 Cswitch 37701 Readch 15.5M
Syscall 26084 Writech 2566.9K
Kernel 27.1 |######## | Reads 2283 Rawin 0
User 39.5 |########### | Writes 154 Ttyout 913
Wait 23.6 |####### | Forks 42 Igets 0
Idle 9.6 |### | Execs 41 Namei 923
Runqueue 4.5 Dirblk 73
Interf KBPS I-Pack O-Pack KB-In KB-Out Waitqueue 7.1
en0 19883 13056 7427 16407 3476
en1 0.0 0.0 0.0 0.0 0.0 PAGING MEMORY
Faults 7335 Real,MB 24575
Disk Busy% KBPS TPS KB-Read KB-Writ Steals 3895 % Comp 20.7
hdisk4 0.5 4.0 1.0 0.0 4.0 PgspIn 0 % Noncomp 79.5
hdisk0 0.5 4.0 1.0 0.0 4.0 PgspOut 0 % Client 52.9
hdisk3 0.0 0.0 0.0 0.0 0.0 PageIn 3821
hdisk5 0.0 0.0 0.0 0.0 0.0 PageOut 616 PAGING SPACE
hdisk1 0.0 0.0 0.0 0.0 0.0 Sios 3097 Size,MB 6144
% Used 24.4
kbiod (47552) 14.4% PgSp: 0.2mb root % Free 75.5
oracle (348674 10.6% PgSp: 5.5mb prdora00
oracle (147306 6.6% PgSp: 5.9mb prdora00
oracle (185832 6.3% PgSp: 5.4mb prdora00 Press "h" for help screen.
oracle (93570) 5.9% PgSp: 2.5mb prdora00 Press "q" to quit program.
oracle (238738 5.1% PgSp: 2.5mb prdora00
oracle (333116 2.6% PgSp: 3.1mb prdora00
oracle (192050 2.1% PgSp: 5.3mb prdora00
ksh (157722 1.3% PgSp: 0.6mb prdora00
oracle (209446 1.1% PgSp: 3.6mb prdora00
lrud (2580) 0.8% PgSp: 0.2mb root
sqlplus (154558 0.5% PgSp: 0.8mb prdora00
oracle (146500 0.5% PgSp: 1.3mb prdora00
topas (365184 0.4% PgSp: 0.5mb dcpetam1
aioserver(32766) 0.4% PgSp: 0.2mb root
aioserver(34832) 0.4% PgSp: 0.2mb root


thx & greetings
Peter Tameris
 
Hi,

Concerning vmtune, you have the following switches:
-p 629138 pages of 4 KB each (10% of real memory)
-P 3145694 pages of 4 KB each (50% of real memory)

-h 0 (strict_maxperm is not set)

Because you did not set -h 1 (strict_maxperm), more than the 50% of RAM you specified with the -P switch may be used to cache file pages in memory if there is a lot of I/O activity. This may be one reason for your system's swapping activity.

Set strict_maxperm this way:
/usr/samples/kernel/vmtune -h 1
This forces the system to write file buffers back to disk once the percentage of RAM used for them reaches the 50% maxperm limit.
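
A quick way to check that the change had the intended effect (just a sketch; the grep pattern assumes the vmtune output format shown above):

# set the hard limit, then confirm the file-page percentage stays at or below maxperm
/usr/samples/kernel/vmtune -h 1
/usr/samples/kernel/vmtune | grep numperm

Note that vmtune settings do not survive a reboot, so the command is usually re-applied at boot time from an inittab entry or a local rc script.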

Ali


 
Hi Ali,

We want to minimize the paging of computational pages to the paging space (disk)...
Having file pages flushed back to disk is therefore the preferred situation...

Do you know if there is any accounting of how many computational versus file pages are kept in memory?

OR....
We are using NFS (and most of the data [the Oracle database] is stored on those NFS disks).
So in our situation, is it better to flush the file pages back to the (NFS) disks, or is performance better if we page the computational pages out to the paging space (local disks)?
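
For what it is worth, the split between computational and file pages can be read from outputs already posted in this thread; a small sketch of how to pull just those numbers (the grep patterns assume the output formats shown above):

# svmon -G "in use" line: the "work" column is computational pages (these go to
# paging space when stolen); "pers" and "clnt" are JFS and NFS/client file pages
# (these are simply written back to their filesystems when stolen)
svmon -G | grep "in use"
# the same split as a percentage of real memory
/usr/samples/kernel/vmtune | egrep "numperm|numclient"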

Perhaps you want to know about the rest of our infrastructure:
We have rootvg on the local disk(s).
We have 2 NetApp filers, each with 1 transparent volume...
We have 2 x 16 NFS mount points to those filers (volumes).

We have only 1 Gigabit Ethernet card active.

Do you have any suggestions about our vmtune parameters?

Greetings
Peter Tameris
 
Hi Peter,

Well, I am sorry, but I am not very experienced with NFS. All I can say is that when we set strict_maxperm (-h 1) with vmtune, our system was forced to flush its file buffers out to disk.
This way we were sure that 80% of RAM was always available for the system: the number of file memory pages (numperm) will never grow beyond maxperm.
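
A rough way to confirm that theory on your box would be to log numperm and paging-space usage over a working day and see whether they climb together (just a sketch; the log location is made up):

# append a timestamped numperm / paging-space sample every 10 minutes
while true
do
    date                                      >> /tmp/numperm.log
    /usr/samples/kernel/vmtune | grep numperm >> /tmp/numperm.log
    lsps -s                                   >> /tmp/numperm.log
    sleep 600
done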

Take a look at the following page, maybe it will give you some tips.


Best regards
Ali
 
Hi,
I had a brief look at your issue and I am more concerned about the disk utilisation of hdisk4 & hdisk0. Your problem might be in the I/O area, not memory/CPU.
I am currently looking at my own system: the pi/po columns of my vmstat output are consistently non-zero and my lsps -s shows 65% used. In the output you submitted, your pi/po columns look consistently zero.
Check your hdisk4 & hdisk0 and see what you can spread to other, less busy disks...
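
A sketch of the obvious first step, using standard AIX commands, to see what is actually placed on those two disks and how busy they stay:

# list the logical volumes (and mount points) on each busy disk
lspv -l hdisk0
lspv -l hdisk4
# watch just those two disks for a minute
iostat -d hdisk0 hdisk4 5 12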

 
Hi,

Yes, I know about disks 4 & 0...
Those hold the redo log files from Oracle... so with 1000 users those files are continuously updated...
So spreading is not really an option, because another disk would then also become very busy...

Or do you know another solution for spreading the redo logs from Oracle?

The problem with lsps -s is that before the 1st of August the paging space usage was always around 0%, and later it grew to 16%...
The problem then is that it swaps memory to disk, which is of course very slow... so we try not to swap memory (working memory, of course) to disk...

In the meantime we have changed some vmtune parameters and we have no paging space usage anymore... so that problem is fixed... but the load on the machine is still very high...

Or is our network the bottleneck?...
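
A first impression of whether NFS is the constraint could come from the client-side NFS statistics (a sketch; both commands are standard AIX, but the numbers need to be read against the busy-hour load):

# high retransmission or timeout counts point at the network or the filers
nfsstat -c
# per-mount retransmission counts and round-trip timers for the NetApp mounts
nfsstat -m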

The latest output of the commands:

# /usr/samples/kernel/vmtune
vmtune: current values:
-p -P -r -R -f -F -N -W
minperm maxperm minpgahead maxpgahead minfree maxfree pd_npages maxrandwrt
629138 3145694 2 16 960 992 524288 0

-M -w -k -c -b -B -u -l -d
maxpin npswarn npskill numclust numfsbufs hd_pbuf_cnt lvm_bufcnt lrubucket defps
5062603 49152 12288 1 93 176 9 131072 1

-s -n -S -L -g -h
sync_release_ilock nokilluid v_pinshm lgpg_regions lgpg_size strict_maxperm
1 0 0 0 0 0

-t
maxclient
3145694

PTA balance threshold percentage = 50.0%

number of valid memory pages = 6291389 maxperm=50.0% of real memory
maximum pinable=80.5% of real memory minperm=10.0% of real memory
number of file memory pages = 4228708 numperm=67.2% of real memory

number of compressed memory pages = 0 compressed=0.0% of real memory
number of client memory pages = 2649723 numclient=42.1% of real memory
# of remote pgs sched-pageout = 0 maxclient=50.0% of real memory

# lsps -s
Total Paging Space Percent Used
6144MB 1%


# netstat -v
-------------------------------------------------------------
ETHERNET STATISTICS (ent0) :
Device Type: Gigabit Ethernet-SX PCI Adapter (14100401)
Hardware Address: 00:02:55:9a:32:f4
Elapsed Time: 179 days 14 hours 36 minutes 11 seconds

Transmit Statistics: Receive Statistics:
-------------------- -------------------
Packets: 42491853404 Packets: 3905800939
Bytes: 27017210420280 Bytes: 44658796361820
Interrupts: 442093988 Interrupts: 22943556388
Transmit Errors: 0 Receive Errors: 0
Packets Dropped: 0 Packets Dropped: 0
Bad Packets: 0
Max Packets on S/W Transmit Queue: 267
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 40

Broadcast Packets: 50837 Broadcast Packets: 37649860
Multicast Packets: 2 Multicast Packets: 0
No Carrier Sense: 0 CRC Errors: 0
DMA Underrun: 0 DMA Overrun: 0
Lost CTS Errors: 0 Alignment Errors: 0
Max Collision Errors: 0 No Resource Errors: 0
Late Collision Errors: 0 Receive Collision Errors: 0
Deferred: 0 Packet Too Short Errors: 0
SQE Test: 0 Packet Too Long Errors: 0
Timeout Errors: 0 Packets Discarded by Adapter: 0
Single Collision Count: 0 Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 40

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 2000
Driver Flags: Up Broadcast Running
Simplex AlternateAddress 64BitSupport
PrivateSegment DataRateSet

Adapter Specific Statistics:
----------------------------
Additional Driver Flags: Autonegotiate
Entries to transmit timeout routine: 0
Firmware Level: 12.4.17
Transmit and Receive Flow Control Status: Enabled
Link Status: Up
Autonegotiation: Enabled
Media Speed Running: 1000 Mbps Full Duplex
-------------------------------------------------------------

 
You have hdisk0 and hdisk4 being hammered by redo logs, but your paging space is also on hdisk0.

Paging is done in round-robin fashion across the active paging spaces, so anything with heavy activity should _never_ be put on the same disk as a paging space.

If you moved those redo logs to any disk that doesn't have a paging space on it, you would see some improvement.
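
If the redo logs live in a local logical volume, the move itself is straightforward; a sketch with a placeholder LV name and a target disk picked only because it sits idle in the iostat output above:

# see which logical volumes (and filesystems) sit on the busy disk
lspv -l hdisk0
# move the redo-log LV ("redolv" is a placeholder name) to an idle disk
migratepv -l redolv hdisk0 hdisk6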

 