HACMP


nwade

MIS
Jul 17, 2004
5
US
We recently upgraded our P660 from AIX 4.3.3 to 5.1 and HACMP from 4.x to 5, and we have been having some performance issues since. Could someone give me a brief explanation of the "performance load" of running HACMP? Thanks.
 
Are you sure that HACMP is causing your performance problems? What kind of performance problems are you seeing?



Jarrett
IBM Certified Systems Expert pSeries HACMP for AIX
IBM Certified Advanced Technical Expert for AIX 5L and pSeries
- AIX 5L Systems Administration
- AIX 5L Performance and System Tuning
- p690 Technical Support
- pSeries HACMP for AIX
 
Thanks for the response. Our issues mostly pointed to I/O, so we made a change and turned on the fast write cache option on our SSA. There is evidently a limit of two loops when using the fast write cache, so we basically had to back off our HA, since sharing all the drives required four. So... our performance has improved, which we think is related to the fast write cache. But since we made two changes (we came up without HA), some of our peers think it could just as well have been a problem with HA, since that was one of the last things we changed before we fell into the performance hole in the first place. So how do I find out what HA is doing in the background and how that could impact performance?
 
These are the processes that HACMP runs:

/usr/es/sbin/cluster/clsmuxpd
/usr/es/sbin/cluster/clinfo
/usr/es/sbin/cluster/clstrmgr
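
If you want to see whether those daemons are actually costing you anything, a quick check along these lines should work (just a sketch; on HACMP 5.x the subsystems may show up with an ES suffix, e.g. clstrmgrES, and I'm assuming they are registered under the cluster group):

/usr: $ ps -ef | egrep 'clstrmgr|clsmuxpd|clinfo' | grep -v egrep
/usr: $ lssrc -g cluster     # subsystem status for the cluster group

In my experience these daemons are mostly idle heartbeat and bookkeeping work and should not account for a large share of CPU.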

If your performance increased dramatically by utilizing the fast write cache, then I would think your performance problems have more to do with heavy disk writes than with HACMP. Are your heartbeats for HACMP configured for serial or across the SSA adapters?
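
A couple of quick checks that may help sort that out (sketches only; the exact utility paths and attribute names vary a bit by HACMP and SSA driver level, and hdisk20 is just an example disk, so substitute one of your busy SSA disks):

/usr: $ lsattr -El hdisk20 | grep -i write          # look for the fast write cache setting on an SSA disk
/usr: $ /usr/es/sbin/cluster/utilities/cllsif        # list the configured cluster networks and interfaces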

Do you have memory properly tuned with vmtune?

Jarrett
IBM Certified Systems Expert pSeries HACMP for AIX
IBM Certified Advanced Technical Expert for AIX 5L and pSeries
- AIX 5L Systems Administration
- AIX 5L Performance and System Tuning
- p690 Technical Support
- pSeries HACMP for AIX
 
Thanks for the info. We don't seem to have vmtune, but I did run a vmstat 2. Here it is:
/: # vmstat 2
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
2 2 540626 13690 0 0 0 3 9 0 958 10310 3057 13 13 68 5
9 1 540713 13583 0 0 0 0 0 0 2265 27346 5846 48 20 12 20
7 0 539515 14780 0 0 1 0 0 0 1066 15482 3258 27 31 37 6
1 0 539512 14781 0 0 0 0 0 0 585 6897 2510 26 10 62 2
4 1 541181 13109 0 0 0 0 0 0 731 15295 2740 29 18 50 3
10 1 540366 13913 0 0 0 0 0 0 1528 18566 4397 56 20 10 14
5 0 539692 14582 0 0 0 0 0 0 1379 20954 3866 67 18 8 7
3 3 539614 14646 0 0 0 0 0 0 1837 27850 5259 51 26 10 12
1 2 538771 17390 0 0 0 0 0 0 2175 24906 5830 44 20 16 20
3 1 539587 16566 0 0 0 0 0 0 1113 24812 3584 38 24 30 8
 
OK... I'm not sure the output of the current vmstat reflects the whole picture (was this run during a slow period?). You are correct that there appears to be a lot of CPU activity and low disk usage. Could you post the following:

lsps -s
iostat 2
/usr/samples/kernel/vmtune (you said you might not have vmtune installed)
ps aux | head -10
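
If it is easier, you can capture all of that in one file and post it back, something like this (a sketch; the output path is just an example):

/usr: $ { lsps -s; iostat 2 6; ps aux | head -10; } > /tmp/perfdata.txt 2>&1

Note that iostat needs a count argument (the 6 above) or it will run until interrupted.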

What type of application are you running (database, etc.)?
 
Thanks again. I tried to find vmtune but can't locate it. How do I get it? As you guessed, we are at a slow time right now. The system supports an HL7 application interface engine. Here's the other stuff:
^C/home/hci: $ cd /usr/samples/kernel/vmtune
ksh: /usr/samples/kernel/vmtune: 0403-037 The specified path name is not a directory.
/home/hci: $ cd /usr
/usr: $ cd /samples
ksh: /samples: not found.
/usr: $ ls
CYEagent asagent es linux pub sysv
HTTPServer asagentd etc local samples tivoli
IMNSearch bin games lost+found sbin tmp
TT_DB ccs include lpd share ucb
X11R6 dict java130 lpp spool usg
adm doc jdk_base man src websm
agent.cfg docsearch lbin netscape ssa
agent.cfg.old dt lib opt swlag
aix emgrdata libarclic98_api.so perfagent sys
/usr: $ ps aux | head -10
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
root 516 18.8 1.0 12 22820 - A Jul 14 6209:37 wait
root 774 18.2 1.0 12 22820 - A Jul 14 6024:59 wait
root 1032 18.2 1.0 12 22820 - A Jul 14 6024:46 wait
root 1290 18.2 1.0 12 22820 - A Jul 14 6022:46 wait
hci 72820 5.6 2.0 48392 46928 - A 12:44:09 40:05 /hci/root5.3/qdx5
hci 55256 5.0 1.0 38944 23364 - A Jul 14 1663:31 /hci/root5.3/qdx5
hci 40036 4.8 1.0 39948 37228 - A Jul 14 1569:03 /hci/root5.3/q
/usr: $
/home/hci: $ lsps -s
Total Paging Space Percent Used
4288MB 21%
/home/hci: $ iostat 2

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 35.5 13.3 13.5 67.8 5.3

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.7 7.8 1.3 533511 3330053
hdisk1 0.7 6.9 1.1 96606 3330053
hdisk3 0.0 2.9 0.1 1420141 11166
hdisk10 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk8 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk9 0.0 0.0 0.0 0 0
hdisk2 0.0 2.4 0.1 1175589 8130
hdisk7 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk13 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 2.6 0.0 1297197 6361
hdisk4 0.0 0.0 0.0 0 0
hdisk17 0.0 2.5 0.0 1229613 6361
hdisk19 9.0 151.2 41.9 341166 74635145
hdisk20 16.8 421.3 113.6 278082 208678373
hdisk21 17.1 421.1 113.4 150426 208678373
hdisk22 7.7 143.9 39.2 250054 71111243
hdisk24 9.2 181.3 51.1 187402 89741107
hdisk23 8.0 143.8 39.2 181010 71111243
hdisk25 9.5 181.2 51.1 103962 89741107
cd0 0.0 0.0 0.0 0 0
hdisk18 8.7 150.9 41.8 212522 74635145

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 963.8 45.2 29.8 6.4 18.6

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk10 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk8 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk9 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk13 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk19 46.4 633.7 183.8 0 1269
hdisk20 79.9 1651.9 467.4 0 3308
hdisk21 81.9 1651.9 466.4 0 3308
hdisk22 69.4 1323.3 375.0 0 2650
hdisk24 14.5 155.3 48.9 0 311
hdisk23 73.9 1323.3 374.5 0 2650
hdisk25 15.5 155.3 48.9 0 311
cd0 0.0 0.0 0.0 0 0
hdisk18 45.9 633.7 183.8 0 1269

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 966.0 54.4 21.8 6.9 17.0

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 6.5 43.0 9.5 0 86
hdisk1 5.0 43.0 9.5 0 86
hdisk3 0.0 0.0 0.0 0 0
hdisk10 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk8 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk9 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk13 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk19 33.0 446.5 130.0 0 893
hdisk20 79.0 1763.5 485.0 0 3527
hdisk21 80.0 1763.5 484.5 0 3527
hdisk22 34.0 500.5 144.0 0 1001
hdisk24 53.0 965.0 280.0 0 1930
hdisk23 35.5 500.5 144.5 0 1001
hdisk25 56.5 965.0 280.0 0 1930
cd0 0.0 0.0 0.0 0 0
hdisk18 32.0 446.5 130.0 0 893

tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 965.0 39.6 37.8 5.0 17.6

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk10 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
hdisk6 0.0 0.0 0.0 0 0
hdisk8 0.0 0.0 0.0 0 0
hdisk11 0.0 0.0 0.0 0 0
hdisk9 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk7 0.0 0.0 0.0 0 0
hdisk12 0.0 0.0 0.0 0 0
hdisk13 0.0 0.0 0.0 0 0
hdisk14 0.0 0.0 0.0 0 0
hdisk15 0.0 0.0 0.0 0 0
hdisk16 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk17 0.0 0.0 0.0 0 0
hdisk19 36.5 504.5 142.5 0 1009
hdisk20 69.5 1366.0 383.5 0 2732
hdisk21 70.0 1366.0 381.5 0 2732
hdisk22 32.5 470.5 140.0 0 941
hdisk24 70.5 1219.5 352.5 0 2439
hdisk23 37.0 470.5 140.0 0 941
hdisk25 72.5 1219.5 351.5 0 2439
cd0 0.0 0.0 0.0 0 0
hdisk18 35.5 504.5 142.0 0 1009
 
At this point it is hard to tell exactly where your bottleneck is, but I suspect it's either your I/O or your memory tuning. You should install the bos.adt.samples fileset, which will give you the vmtune command. This does not require a reboot.
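
Installing it is just an installp of that one fileset, along these lines (a sketch, assuming your install media is mounted at /dev/cd0; substitute your actual device or install directory):

/usr: # installp -aX -d /dev/cd0 bos.adt.samples
/usr: # lslpp -l bos.adt.samples      # verify the fileset is installed

You could also pull it in with smitty install_latest if you prefer the menus.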

Since roughly 900 MB is swapped out to paging space (21% of the 4288 MB), we can help that by tuning memory with vmtune (lower maxperm and minperm). I suspect that during your slow periods the fre column in vmstat drops dramatically, but you will need to run vmstat during those periods to confirm. If so, we can raise the free list with vmtune to improve application response times.

Once you get vmtune installed, run /usr/samples/kernel/vmtune and also /usr/samples/kernel/vmtune -a.
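
Once you can see the current values, the adjustments would look something like this (a sketch only; the 10/40 and minfree/maxfree numbers below are illustrative, not a recommendation, so size them to your workload):

/usr/samples/kernel/vmtune                 # display the current settings
/usr/samples/kernel/vmtune -p 10 -P 40     # lower minperm%/maxperm% so file pages get stolen before computational pages
/usr/samples/kernel/vmtune -f 240 -F 520   # raise minfree/maxfree to keep a bigger free list

Keep in mind that vmtune settings do not survive a reboot, so once you settle on values, put the command in an rc script or an inittab entry.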



 