
Ghost in my Server


khalidaaa (Technical User)
Hi all,

My server (an LPAR on a P5 570 machine) shut itself down, and below is the errpt -a output for the incident.

HACMP is running on this system, and it is the primary node of the cluster. When I came in today, I found that the standby node had taken over the shared volume group (oravg, on the SAN).

Any help would be appreciated.

Code:
---------------------------------------------------------------------------
LABEL:          REBOOT_ID
IDENTIFIER:     2BFA76F6
Date/Time:       Tue Jun  6 02:13:39 SAUST 2006
Sequence Number: 403
Machine Id:      00C5C1EB4C00
Node Id:         localhost
Class:           S
Type:            TEMP
Resource Name:   SYSPROC         

Description
SYSTEM SHUTDOWN BY USER

Probable Causes
SYSTEM SHUTDOWN

Detail Data
USER ID
           0
0=SOFT IPL 1=HALT 2=TIME REBOOT
           1
TIME TO REBOOT (FOR TIMED REBOOT ONLY)
           0
---------------------------------------------------------------------------
LABEL:          ERRLOG_ON
IDENTIFIER:     9DBCFDEE

Date/Time:       Tue Jun  6 15:02:08 SAUST 2006
Sequence Number: 402
Machine Id:      00C5C1EB4C00
Node Id:         localhost
Class:           O
Type:            TEMP
Resource Name:   errdemon        

Description
ERROR LOGGING TURNED ON

Probable Causes
ERRDEMON STARTED AUTOMATICALLY

User Causes
/USR/LIB/ERRDEMON COMMAND

        Recommended Actions
        NONE

---------------------------------------------------------------------------
LABEL:          TS_NIM_ERROR_STUCK_
IDENTIFIER:     864D2CE3
Date/Time:       Tue Jun  6 02:13:36 SAUST 2006
Sequence Number: 401
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           S
Type:            PERM
Resource Name:   topsvcs         

Description
NIM thread blocked

Probable Causes
A thread in a Topology Services Network Interface Module (NIM) process
was blocked
Topology Services NIM process cannot get timely access to CPU

User Causes
Excessive memory consumption is causing high memory contention
Excessive disk I/O is causing high memory contention

        Recommended Actions
        Examine I/O and memory activity on the system
        Reduce load on the system
        Tune virtual memory parameters
        Call IBM Service if problem persists

Failure Causes
Excessive virtual memory activity prevents NIM from making progress
Excessive disk I/O traffic is interfering with paging I/O

        Recommended Actions
        Examine I/O and memory activity on the system
        Reduce load on the system
        Tune virtual memory parameters
        Call IBM Service if problem persists

Detail Data
DETECTING MODULE
rsct,nim_control.C,1.39.1.2,5492              
ERROR ID 
6XnGH40Ue9V2/LWT/T4U1/0...................
REFERENCE CODE
                                          
Thread which was blocked
send thread
Interval in seconds during which process was blocked
          35
Interface name
rhdisk1
---------------------------------------------------------------------------
LABEL:          OPMSG
IDENTIFIER:     AA8AB241

Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 400
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           O
Type:            TEMP
Resource Name:   OPERATOR        

Description
OPERATOR NOTIFICATION

User Causes
ERRLOGGER COMMAND

        Recommended Actions
        REVIEW DETAILED DATA

Detail Data
MESSAGE FROM ERRLOGGER COMMAND
clexit.rc : Unexpected termination of clstrmgrES
---------------------------------------------------------------------------
LABEL:          SRC_RSTRT
IDENTIFIER:     BA431EB7

Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 399
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           S
Type:            PERM
Resource Name:   SRC             

Description
SOFTWARE PROGRAM ERROR

Probable Causes
APPLICATION PROGRAM

Failure Causes
SOFTWARE PROGRAM

        Recommended Actions
        VERIFY SUBSYSTEM RESTARTED AUTOMATICALLY

Detail Data
SYMPTOM CODE
           0
SOFTWARE ERROR CODE
       -9035
ERROR CODE
           0
DETECTING MODULE
'srchevn.c'@line:'217'
FAILING MODULE
emsvcs
---------------------------------------------------------------------------
LABEL:          SRC_SVKO
IDENTIFIER:     BC3BE5A3

Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 398
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           S
Type:            PERM
Resource Name:   SRC             

Description
SOFTWARE PROGRAM ERROR

Probable Causes
APPLICATION PROGRAM

Failure Causes
SOFTWARE PROGRAM

        Recommended Actions
        MANUALLY RESTART SUBSYSTEM IF NEEDED

Detail Data
SYMPTOM CODE
        1024
SOFTWARE ERROR CODE
       -9017
ERROR CODE
           0
DETECTING MODULE
'srchevn.c'@line:'350'
FAILING MODULE
clstrmgrES
---------------------------------------------------------------------------
LABEL:          HA002_ER
IDENTIFIER:     12081DC6
Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 397
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           S
Type:            PERM
Resource Name:   haemd           

Description
SOFTWARE PROGRAM ERROR

Probable Causes
SUBSYSTEM

Failure Causes
SUBSYSTEM

        Recommended Actions
        REPORT DETAILED DATA
        CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
DETECTING MODULE
LPP=PSSP,Fn=emd_gsi.c,SID=1.4.1.36,L#=1361,                                     
DIAGNOSTIC EXPLANATION
haemd: 2521-032 Cannot dispatch group services (1).

---------------------------------------------------------------------------
LABEL:          SRC_SVKO
IDENTIFIER:     BC3BE5A3

Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 396
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           S
Type:            PERM
Resource Name:   SRC             

Description
SOFTWARE PROGRAM ERROR

Probable Causes
APPLICATION PROGRAM

Failure Causes
SOFTWARE PROGRAM

        Recommended Actions
        MANUALLY RESTART SUBSYSTEM IF NEEDED

Detail Data
SYMPTOM CODE
        2560
SOFTWARE ERROR CODE
       -9017
ERROR CODE
           0
DETECTING MODULE
'srchevn.c'@line:'350'
FAILING MODULE
grpsvcs
---------------------------------------------------------------------------
LABEL:          GS_DOM_MERGE_ER
IDENTIFIER:     9DEC29E1

Date/Time:       Tue Jun  6 02:13:33 SAUST 2006
Sequence Number: 395
Machine Id:      00C5C1EB4C00
Node Id:         s2oraplp
Class:           O
Type:            PERM
Resource Name:   grpsvcs         

Description
Group Services daemon exit to merge domains

Probable Causes
Network between two node groups has repaired

Failure Causes
Network communication has been blocked.
Topology Services has been partitioned.

        Recommended Actions
        Check the network connection.
Check the Topology Services.
Verify that Group Services daemon has been restarted
Call IBM Service if problem persists

Detail Data
DETECTING MODULE
RSCT,NS.C,1.107.1.35,4370                     
ERROR ID 
6Vb0vR0Re9V2/iRM/T4U1/0...................
REFERENCE CODE
                                          
DIAGNOSTIC EXPLANATION
The master requests to dissolve my domain because of the merge with other domain 1.9
---------------------------------------------------------------------------

Regards,
Khalid
 
Are you heartbeating on the disk? It looks as though you've got some very heavy I/O.
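You can check with the cluster utilities; a minimal sketch, assuming the standard HACMP 5.x paths:

Code:
# List the cluster interfaces; a diskhb network will show the hdisk as its device
/usr/es/sbin/cluster/utilities/cllsif

# Topology Services status: heartbeat rings, missed heartbeats, blocked send/receive threads
lssrc -ls topsvcs

# Quick look at current disk and CPU load
iostat 5 3
vmstat 5 3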

Mike

"A foolproof method for sculpting an elephant: first, get a huge block of marble, then you chip away everything that doesn't look like an elephant."

 
Hi,
Also have a look at /tmp/hacmp.out: are there any errors? The first entry (REBOOT_ID) indicates the system was shut down by a user, so check cron for any scheduled jobs that reboot the machine.
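A quick way to rule out a scheduled reboot; just a sketch of the usual places to look:

Code:
# Any shutdown/reboot entries in root's crontab?
crontab -l | egrep -i 'shutdown|reboot'

# Check the other users' crontabs and any pending at jobs as well
ls -l /var/spool/cron/crontabs
at -l

# The login accounting records show when the system was taken down
last | grep shutdown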

The clstrmgrES daemon also died unexpectedly; check the hacmp.out and cluster.log files. Normally, if the clstrmgr daemon dies or is killed, the cluster will fail over. A failover can also occur if resources are low on the server (memory, CPU, swap space) or there is a lot of I/O, eventually causing the server to slow down and crash.
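To see whether the box was starved around that time, these are worth a look (a sketch; the sar history is only there if system activity collection is enabled):

Code:
# Paging space usage and an overall memory picture
lsps -a
svmon -G

# Historical CPU/paging from sar for the 6th, if sadc collection is on
sar -f /var/adm/sa/sa06

# Scan errpt for anything else clustered around the time of the crash
errpt | more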
 
Well, I don't know why this is happening.

mrn, yes, I believe the cluster has been set up to do heartbeating on disk. How can I check this?

DSMARWAY, I don't know why this happened at 2 AM; nothing is running on the server at that time.

Here is the /tmp/hacmp.out output:

Code:
                        HACMP Event Summary
Event: /usr/es/sbin/cluster/events/check_for_site_down s1orapls 
Start time: Tue Jun  6 02:13:11 2006

End time: Tue Jun  6 02:13:11 2006

Action:         Resource:                       Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------

Jun  6 02:13:12 EVENT START: node_down s1orapls

:node_down[79] [[ high = high ]]
:node_down[79] version=1.49
:node_down[80] :node_down[80] cl_get_path
HA_DIR=es
:node_down[82] export NODENAME=s1orapls
:node_down[83] export PARAM=
:node_down[85] UPDATESTATDFILE=/usr/es/sbin/cluster/etc/updatestatd
:node_down[94] STATUS=0
:node_down[96] [[ -z  ]]
:node_down[97] EMULATE=REAL
:node_down[100] set -u
:node_down[102] ((  1 < 1  ))
:node_down[107] rm -f /tmp/.RPCLOCKDSTOPPED
:node_down[108] rm -f /usr/es/sbin/cluster/etc/updatestatd
:node_down[110] [[  = forced ]]
:node_down[128] UPDATESTATD=0
:node_down[129] export UPDATESTATD
:node_down[134] [[ FALSE = FALSE ]]
:node_down[141] set -a
:node_down[142] clsetenvgrp s1orapls node_down
:clsetenvgrp[50] [[ high = high ]]
:clsetenvgrp[50] version=1.16
:clsetenvgrp[52] usingVer=clSetenvgrp
:clsetenvgrp[57] clSetenvgrp s1orapls node_down
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[58] exit 0
:node_down[142] eval FORCEDOWN_GROUPS="" RESOURCE_GROUPS="" HOMELESS_GROUPS="" HOMELESS_FOLLOWER_GROUPS="" ERRSTATE_GROUPS="" PRINCIPAL_ACTIONS="" ASSOCIATE_ACTIONS="" AUXILLIARY_ACTIONS=""
:node_down[142] FORCEDOWN_GROUPS= RESOURCE_GROUPS= HOMELESS_GROUPS= HOMELESS_FOLLOWER_GROUPS= ERRSTATE_GROUPS= PRINCIPAL_ACTIONS= ASSOCIATE_ACTIONS= AUXILLIARY_ACTIONS=
:node_down[143] RC=0
:node_down[144] set +a
:node_down[145] ((  0 != 0  ))
:node_down[157] [[ FALSE = FALSE ]]
:node_down[159] process_resources
:process_resources[2122] [[ high = high ]]
:process_resources[2122] version=1.84
:process_resources[2123] :process_resources[2123] cl_get_path
HA_DIR=es
:process_resources[2125] STATUS=0
:process_resources[2126] sddsrv_off=FALSE
:process_resources[2128] [ ! -n  ]
:process_resources[2130] EMULATE=REAL
:process_resources[2133] true
:process_resources[2135] set -a
:process_resources[2138] clRGPA
:clRGPA[49] [[ high = high ]]
:clRGPA[49] version=1.16
:clRGPA[51] usingVer=clrgpa
:clRGPA[56] clrgpa
:clRGPA[57] exit 0
:process_resources[2138] eval JOB_TYPE=SYNC_VGS ACTION=ACQUIRE VOLUME_GROUPS="oravg" RESOURCE_GROUPS="ORAPL_group "
:process_resources[2138] JOB_TYPE=SYNC_VGS ACTION=ACQUIRE VOLUME_GROUPS=oravg RESOURCE_GROUPS=ORAPL_group 
:process_resources[2140] RC=0
:process_resources[2141] set +a
:process_resources[2143] [ 0 -ne 0 ]
:process_resources[2196] export GROUPNAME=ORAPL_group 
ORAPL_group :process_resources[2196] [[ ACQUIRE = ACQUIRE ]]
ORAPL_group :process_resources[2198] sync_volume_groups
ORAPL_group :process_resources[3] STAT=0
ORAPL_group:process_resources[6] export GROUPNAME
ORAPL_group:process_resources[8] get_list_head oravg
ORAPL_group:process_resources[3] read listhead listtail
ORAPL_group:process_resources[3] IFS=:
ORAPL_group:process_resources[8] read LIST_OF_VOLUME_GROUPS_FOR_RG
ORAPL_group:process_resources[3] echo oravg
ORAPL_group:process_resources[4] tr ,  
ORAPL_group:process_resources[4] echo oravg
ORAPL_group:process_resources[9] read VOLUME_GROUPS
ORAPL_group:process_resources[9] get_list_tail oravg
ORAPL_group:process_resources[3] echo oravg
ORAPL_group:process_resources[3] read listhead listtail
ORAPL_group:process_resources[3] IFS=:
ORAPL_group:process_resources[4] echo
ORAPL_group:process_resources[14] sort
ORAPL_group:process_resources[14] lsvg -L -o
ORAPL_group:process_resources[14] ORAPL_group:process_resources[14] 1> /tmp/lsvg.out.991436
2> /tmp/lsvg.err
ORAPL_group:process_resources[14] echo oravg
ORAPL_group:process_resources[14] tr   \n
ORAPL_group:process_resources[14] comm -12 /tmp/lsvg.out.991436 -
ORAPL_group:process_resources[14] sort
ORAPL_group:process_resources[18] cl_sync_vgs oravg
ORAPL_group:process_resources[14] [[ -s /tmp/lsvg.err ]]
ORAPL_group:process_resources[24] rm -f /tmp/lsvg.out.991436 /tmp/lsvg.err
ORAPL_group:process_resources[27] return 0
ORAPL_group:process_resources[2133] true
ORAPL_group:process_resources[2135] set -a
ORAPL_group:process_resources[2138] clRGPA
ORAPL_group:cl_sync_vgs[150] [[ high == high ]]
ORAPL_group:cl_sync_vgs[150] version=1.12
ORAPL_group:cl_sync_vgs[152] (( 1 == 0 ))
ORAPL_group:cl_sync_vgs[160] check_sync oravg
ORAPL_group:cl_sync_vgs[4] typeset vg_name
ORAPL_group:cl_sync_vgs[5] typeset vgid
ORAPL_group:cl_sync_vgs[6] typeset disklist
ORAPL_group:cl_sync_vgs[7] typeset lv_name
ORAPL_group:cl_sync_vgs[8] typeset -i stale_count
ORAPL_group:cl_sync_vgs[9] typeset -i mode
ORAPL_group:cl_sync_vgs[11] vg_name=oravg
ORAPL_group:cl_sync_vgs[12] disklist=''
ORAPL_group:cl_sync_vgs[14] getlvodm -v oravg
ORAPL_group:clRGPA[49] [[ high = high ]]
ORAPL_group:clRGPA[49] version=1.16
ORAPL_group:clRGPA[51] usingVer=clrgpa
ORAPL_group:clRGPA[56] clrgpa
ORAPL_group:cl_sync_vgs[14] vgid=00c5c1eb00004c000000010a3a545d8c
ORAPL_group:cl_sync_vgs[20] lsvg -L -p oravg
ORAPL_group:cl_sync_vgs[20] LANG=C
ORAPL_group:cl_sync_vgs[21] tail -n +3
ORAPL_group:clRGPA[57] exit 0
ORAPL_group:process_resources[2138] eval JOB_TYPE=NONE
ORAPL_group:process_resources[2138] JOB_TYPE=NONE
ORAPL_group:process_resources[2140] RC=0
ORAPL_group:process_resources[2141] set +a
ORAPL_group:process_resources[2143] [ 0 -ne 0 ]
ORAPL_group:process_resources[2422] break
ORAPL_group:process_resources[2433] [[ FALSE = TRUE ]]
ORAPL_group:process_resources[2439] exit 0
:node_down[167] [ -f /usr/es/sbin/cluster/etc/updatestatd ]
:node_down[173] [[ FALSE = FALSE ]]
:node_down[208] [ REAL = EMUL ]
:node_down[213] [ -f /tmp/.RPCLOCKDSTOPPED ]
:node_down[235] process_resources FENCE
ORAPL_group:cl_sync_vgs[22] read pv_name pv_state rest
ORAPL_group:cl_sync_vgs[24] [[ active == removed ]]
ORAPL_group:cl_sync_vgs[24] [[ active == missing ]]
ORAPL_group:cl_sync_vgs[22] read pv_name pv_state rest
ORAPL_group:cl_sync_vgs[33] [[ -n '' ]]
ORAPL_group:cl_sync_vgs[63] cut -f2- '-d '
ORAPL_group:cl_sync_vgs[63] lqueryvg -g 00c5c1eb00004c000000010a3a545d8c -L
:process_resources[2122] [[ high = high ]]
:process_resources[2122] version=1.84
:process_resources[2123] :process_resources[2123] cl_get_path
ORAPL_group:cl_sync_vgs[68] read lv_name stale_count
ORAPL_group:cl_sync_vgs[69] (( 1 != 1 ))
ORAPL_group:cl_sync_vgs[68] read lv_name stale_count
ORAPL_group:cl_sync_vgs[69] (( 1 != 1 ))
ORAPL_group:cl_sync_vgs[68] read lv_name stale_count
ORAPL_group:cl_sync_vgs[69] (( 1 != 1 ))
ORAPL_group:cl_sync_vgs[68] read lv_name stale_count
HA_DIR=es
:process_resources[2125] STATUS=0
:process_resources[2126] sddsrv_off=FALSE
:process_resources[2128] [ ! -n  ]
:process_resources[2130] EMULATE=REAL
:process_resources[2133] true
:process_resources[2135] set -a
:process_resources[2138] clRGPA FENCE
:clRGPA[49] [[ high = high ]]
:clRGPA[49] version=1.16
:clRGPA[51] usingVer=clrgpa
:clRGPA[56] clrgpa FENCE
:clRGPA[57] exit 0
:process_resources[2138] eval JOB_TYPE=NONE
:process_resources[2138] JOB_TYPE=NONE
:process_resources[2140] RC=0
:process_resources[2141] set +a
:process_resources[2143] [ 0 -ne 0 ]
:process_resources[2422] break
:process_resources[2433] [[ FALSE = TRUE ]]
:process_resources[2439] exit 0
:node_down[247] [[ s1orapls = s2oraplp ]]
:node_down[264] [[ s1orapls = s2oraplp ]]
:node_down[276] [[ s1orapls = s2oraplp ]]
:node_down[289] exit 0
Jun  6 02:13:12 EVENT COMPLETED: node_down s1orapls 0

                        HACMP Event Summary
Event: node_down s1orapls 
Start time: Tue Jun  6 02:13:11 2006

End time: Tue Jun  6 02:13:13 2006

Action:         Resource:                       Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------

Jun  6 02:13:13 EVENT START: node_down_complete s1orapls

:node_down_complete[80] [[ high = high ]]
:node_down_complete[80] version=1.2.3.46
:node_down_complete[81] :node_down_complete[81] cl_get_path
HA_DIR=es
:node_down_complete[83] export NODENAME=s1orapls
:node_down_complete[84] export PARAM=
:node_down_complete[86] VSD_PROG=/usr/lpp/csd/bin/hacmp_vsd_down2
:node_down_complete[87] HPS_PROG=/usr/es/sbin/cluster/events/utils/cl_HPS_init
:node_down_complete[96] STATUS=0
:node_down_complete[98] [ ! -n  ]
:node_down_complete[100] EMULATE=REAL
:node_down_complete[103] set -u
:node_down_complete[105] [ 1 -lt 1 ]
:node_down_complete[111] [[  = forced ]]
:node_down_complete[133] [[ FALSE = FALSE ]]
:node_down_complete[141] set -a
:node_down_complete[142] clsetenvgrp s1orapls node_down_complete
:clsetenvgrp[50] [[ high = high ]]
:clsetenvgrp[50] version=1.16
:clsetenvgrp[52] usingVer=clSetenvgrp
:clsetenvgrp[57] clSetenvgrp s1orapls node_down_complete
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[58] exit 0
:node_down_complete[142] eval FORCEDOWN_GROUPS="" RESOURCE_GROUPS="" HOMELESS_GROUPS="" HOMELESS_FOLLOWER_GROUPS="" ERRSTATE_GROUPS="" PRINCIPAL_ACTIONS="" ASSOCIATE_ACTIONS="" AUXILLIARY_ACTIONS=""
:node_down_complete[142] FORCEDOWN_GROUPS= RESOURCE_GROUPS= HOMELESS_GROUPS= HOMELESS_FOLLOWER_GROUPS= ERRSTATE_GROUPS= PRINCIPAL_ACTIONS= ASSOCIATE_ACTIONS= AUXILLIARY_ACTIONS=
:node_down_complete[143] RC=0
:node_down_complete[144] set +a
:node_down_complete[146] [ 0 -ne 0 ]
:node_down_complete[157] [[ FALSE = FALSE ]]
:node_down_complete[159] process_resources
:process_resources[2122] [[ high = high ]]
:process_resources[2122] version=1.84
:process_resources[2123] :process_resources[2123] cl_get_path
HA_DIR=es
:process_resources[2125] STATUS=0
:process_resources[2126] sddsrv_off=FALSE
:process_resources[2128] [ ! -n  ]
:process_resources[2130] EMULATE=REAL
:process_resources[2133] true
:process_resources[2135] set -a
:process_resources[2138] clRGPA
:clRGPA[49] [[ high = high ]]
:clRGPA[49] version=1.16
:clRGPA[51] usingVer=clrgpa
:clRGPA[56] clrgpa
:clRGPA[57] exit 0
:process_resources[2138] eval JOB_TYPE=NONE
:process_resources[2138] JOB_TYPE=NONE
:process_resources[2140] RC=0
:process_resources[2141] set +a
:process_resources[2143] [ 0 -ne 0 ]
:process_resources[2422] break
:process_resources[2433] [[ FALSE = TRUE ]]
:process_resources[2439] exit 0
:node_down_complete[160] [ 0 -ne 0 ]
:node_down_complete[170] [ -f /usr/lpp/csd/bin/hacmp_vsd_down2 ]
:node_down_complete[189] :node_down_complete[189] odmget -qnodename = s2oraplp HACMPadapter
:node_down_complete[189] grep type
:node_down_complete[189] grep hps
SP_SWITCH=
:node_down_complete[191] :node_down_complete[191] lscfg -v
:node_down_complete[191] :node_down_complete[191] grep css
LANG=C
:node_down_complete[191] awk { print $4 }
SWITCH_TYPE=
:node_down_complete[192] :node_down_complete[192] lscfg -v
:node_down_complete[192] LANG=C
:node_down_complete[192] awk { print $4 }
:node_down_complete[192] grep sn
FED_TYPE=
:node_down_complete[199] [ -n  -a -f /usr/es/sbin/cluster/events/utils/cl_HPS_init -a -z  ]
:node_down_complete[240] LOCALCOMP=N
:node_down_complete[244] [[ FALSE = FALSE ]]
:node_down_complete[282] [ s1orapls = s2oraplp ]
:node_down_complete[334] exit 0
Jun  6 02:13:14 EVENT COMPLETED: node_down_complete s1orapls 0

                        HACMP Event Summary
Event: node_down_complete s1orapls 
Start time: Tue Jun  6 02:13:13 2006

End time: Tue Jun  6 02:13:14 2006

Action:         Resource:                       Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
+ [[ high = high ]]
+ version=1.2
+ + cl_get_path
HA_DIR=es
+ STATUS=0
+ set +u
+ [ ]
+ exit 0
                        HACMP Event Summary
Event: /usr/es/sbin/cluster/events/check_for_site_down_complete s1orapls 
Start time: Tue Jun  6 02:13:14 2006

End time: Tue Jun  6 02:13:14 2006

Action:         Resource:                       Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
+ [[ high = high ]]
+ version=1.2
+ + cl_get_path
HA_DIR=es
+ STATUS=0
+ set +u
+ [ ]
+ exit 0

Jun  6 15:13:45 EVENT START: config_too_long 360 /usr/es/sbin/cluster/events/node_up.rp

:config_too_long[64] [[ high = high ]]
:config_too_long[64] version=1.11
:config_too_long[65] :config_too_long[65] cl_get_path
HA_DIR=es
:config_too_long[67] NUM_SECS=360
:config_too_long[68] EVENT=/usr/es/sbin/cluster/events/node_up.rp
:config_too_long[70] HOUR=3600
:config_too_long[71] THRESHOLD=5
:config_too_long[72] SLEEP_INTERVAL=1
:config_too_long[78] PERIOD=30
:config_too_long[81] set -u
:config_too_long[86] LOOPCNT=0
:config_too_long[87] MESSAGECNT=0
:config_too_long[88] :config_too_long[88] cllsclstr -c
:config_too_long[88] cut -d : -f2
:config_too_long[88] grep -v cname
CLUSTER=ORAPL
:config_too_long[89] TIME=360
:config_too_long[90] sleep_cntr=0
:config_too_long[95] [ -x /usr/lpp/ssp/bin/spget_syspar ]
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 360 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 390 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 420 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 450 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 480 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 540 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 600 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 660 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 720 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 780 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 900 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 1020 seconds. Please check cluster status.
WARNING: Cluster ORAPL has been running recovery program '/usr/es/sbin/cluster/events/node_up.rp' for 1140 seconds. Please check cluster status.
                        HACMP Event Summary
Event: /usr/es/sbin/cluster/events/check_for_site_up s2oraplp 
Start time: Tue Jun  6 15:07:47 2006

End time: Tue Jun  6 15:27:09 2006

Action:         Resource:                       Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------

The last action was me starting the cluster back up after booting the server.

Any clue?

regards,
Khalid
 