
HACMP: change IP addresses for all IPs

Status
Not open for further replies.

alexia32 (Technical User)
Jul 31, 2007
Hi,

We have to move 2 clusters to another VLAN, so we need to change all the IP addresses on both clusters.
Has anybody already done this? If so, can you please give me the exact steps to follow?

Here is what I think should be done:
- stop the cluster
- change the IP addresses via the console
- update the /etc/hosts file
- start the cluster
- change the IP labels (topology) for all interfaces (service, standby, boot, heartbeat)

Do we need to rebuild the resource groups, or modify something else?
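As a quick sanity check for the /etc/hosts step above, something like the following could confirm that no addresses from the old subnet linger after the edit. This is only a rough sketch: the subnet prefix, the sample file, and its entries are placeholders, not the real cluster addresses.

```shell
# Sketch: after updating /etc/hosts for the new VLAN, make sure no
# addresses from the old subnet remain. OLD_SUBNET and the sample
# entries below are placeholders, not real cluster addresses.
check_no_stale_entries() {
    # $1 = old subnet prefix, $2 = hosts file to scan
    if grep -q "^$1" "$2"; then
        echo "stale entries found"
    else
        echo "hosts file clean"
    fi
}

# Example with a throwaway file standing in for /etc/hosts:
sample=$(mktemp)
cat > "$sample" <<'EOF'
10.2.2.10  mynode1          # new service address
10.2.2.11  mynode1boot      # new boot address
EOF
check_no_stale_entries "10.1.1." "$sample"   # prints: hosts file clean
```

On a real node the same check would be pointed at /etc/hosts itself, once for each subnet being retired.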


Thanks in advance
Al
 
Well, I don't know much about your cluster, so it would be better if you post the cluster configuration (cldisp -a).

But in general, changing the IPs to a new VLAN doesn't require changing the cluster configuration (assuming the same subnet design as the old one), because the cluster configuration usually stores IP labels (names), which are normally resolved from the /etc/hosts file.

But of course, as you said, you need to stop the cluster first, that's for sure!

One more place to look is /usr/es/sbin/cluster/etc. In some cases you may find IPs listed in clhosts, clhosts.client, rhosts or rhosts.client. These files should contain names only, but it's worth checking them either way.
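That last check can be sketched as a small loop over the files in that directory, flagging any that contain dotted-quad addresses instead of names. The demo directory and file contents below are invented for illustration; on a real node the function would be pointed at /usr/es/sbin/cluster/etc.

```shell
# Sketch: flag files that contain raw IPv4 addresses rather than names.
# The sample directory and its contents are illustrative only.
flag_raw_ips() {
    # $1 = directory holding clhosts / rhosts style files
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        if grep -Eq '([0-9]{1,3}\.){3}[0-9]{1,3}' "$f"; then
            echo "raw IPs in: $(basename "$f")"
        fi
    done
}

demo=$(mktemp -d)
echo "mynode1"   > "$demo/clhosts"   # names only: OK
echo "10.2.2.10" > "$demo/rhosts"    # raw IP: should be flagged
flag_raw_ips "$demo"                 # prints: raw IPs in: rhosts
```

Any file it flags would need its entries replaced with names (or updated to the new addresses) before restarting the cluster.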
 
Hi,

I have changed the IP addresses, and for some of them I modified/removed and recreated the labels/network resources in HACMP.
I made the changes and everything seemed fine: synchronisation and verification succeeded on both nodes. I then started HACMP on both nodes, but the clstat result is not what I wanted!
On each node, clstat shows HACMP as working (all UP) locally, but reports everything DOWN for the other node.
It looks like they don't communicate... Do you know how to fix this?
I have tried several things and checked all the files (hosts, clhosts, etc.), so any idea is helpful.

Here is the clstat output from both nodes:
clstat from node1:
clstat - HACMP Cluster Status Monitor
-------------------------------------

Cluster: mycluster

State: UP Nodes: 2
SubState: STABLE

Node: mynode1 State: UP
Interface: mynode1boot (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode1stby (1) Address: XX.XXX.XXX.XX
State: UP
Interface: mynode1clhb (2) Address: XX.XXX.XXX.XX
State: UP
Interface: mynode1_diskX (0) Address: 0.0.0.0
State: UP
Interface: mynode1 (1) Address: XX.XXX.XXX.XX
State: UP
Resource Group: mucluster1 State: On line

Node: mynode2 State: DOWN
Interface: mynode2boot (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode2stby (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode2clhb (2) Address: XX.XXX.XXX.XX



------------------------------------------------------------------------
clstat from node2:
clstat - HACMP Cluster Status Monitor
-------------------------------------

Cluster: mycluster

State: UP Nodes: 2
SubState: STABLE

Node: mynode1 State: UP
Interface: mynode1boot (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode1stby (1) Address: XX.XXX.XXX.XX
State: UP
Interface: mynode1clhb (2) Address: XX.XXX.XXX.XX
State: UP
Interface: mynode1_diskX (0) Address: 0.0.0.0
State: UP
Interface: mynode1 (1) Address: XX.XXX.XXX.XX
State: UP
Resource Group: mucluster1 State: On line

Node: mynode2 State: DOWN
Interface: mynode2boot (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode2stby (1) Address: XX.XXX.XXX.XX
State: DOWN
Interface: mynode2clhb (2) Address: XX.XXX.XXX.XX


As you can see, node 2 is reported as DOWN, but all the HACMP services on it are up and running:
lssrc -g cluster
Subsystem Group PID Status
clsmuxpdES cluster 254004 active
clstrmgrES cluster 335942 active
clinfoES cluster 348398 active


Thanks in advance for any help!
Cheers
Al
 
Hi,

You can also see from cldump that each node reports HA as correctly up locally, but says the other node is down... yet everything seems fine, as the synchronisation and verification finished with no errors.
Here is the cldump output from each node.
On node 1 I ran /usr/es/sbin/cluster/utilities/cldump:

Obtaining information via SNMP from Node: node1...

_____________________________________________________________________________
Cluster Name: myclusterd
Cluster State: UP
Cluster Substate: STABLE
_____________________________________________________________________________


Node Name: node1 State: UP

Network Name: net_diskhb1 State: UP

Address: Label: node1_diskX0 State: UP

Network Name: net_ether_01 State: UP

Address: XX.XXX.XX.XX Label: node1boot State: DOWN
Address: XX.XXX.XX.XX Label: node1stby State: UP
Address:XX.XXX.XX.XX Label: node1 State: UP

Network Name: net_ether_02 State: UP

Address: XX.XXX.XX.XX Label: node1clhb State: UP


Node Name: node2 State: DOWN

Network Name: net_diskhb1 State: DOWN


Network Name: net_ether_01 State: DOWN

Address: XX.XXX.XX.XX Label: node2boot State: DOWN
Address: XX.XXX.XX.XX Label: node2stby State: DOWN

Network Name: net_ether_02 State: DOWN

Address: XX.XXX.XX.XX Label: node2clhb State: DOWN



Cluster Name: myclusterd

Resource Group Name: mycluster1
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Site Policy: ignore
Node State
--------------- ---------------
node1 ONLINE
node2 OFFLINE

Resource Group Name: mycluster2
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Site Policy: ignore
Node State
--------------- ---------------
node2 OFFLINE
node1 OFFLINE


===================
On node 2 I ran /usr/es/sbin/cluster/utilities/cldump:

Obtaining information via SNMP from Node: node2...

_____________________________________________________________________________
Cluster Name: myclusterd
Cluster State: UP
Cluster Substate: STABLE
_____________________________________________________________________________


Node Name: node1 State: DOWN

Network Name: net_diskhb1 State: DOWN


Network Name: net_ether_01 State: DOWN

Address: XX.XXX.XX.XX Label: node1boot State: DOWN
Address: XX.XXX.XX.XX Label: node1stby State: DOWN

Network Name: net_ether_02 State: DOWN

Address: XX.XXX.XX.XX Label: node1clhb State: DOWN


Node Name: node2 State: UP

Network Name: net_diskhb1 State: UP

Address: Label: node2_diskX0 State: UP

Network Name: net_ether_01 State: UP

Address: XX.XXX.XX.XX Label: node2boot State: DOWN
Address: XX.XXX.XX.XX Label: node2stby State: UP
Address: XX.XXX.XX.XX Label: node2 State: UP

Network Name: net_ether_02 State: UP

Address: XX.XXX.XX.XX Label: node2clhb State: UP



Cluster Name: myclusterd

Resource Group Name: mycluster1
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Site Policy: ignore
Node State
--------------- ---------------
node1 OFFLINE
node2 OFFLINE

Resource Group Name: mycluster2
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Never Fallback
Site Policy: ignore
Node State
--------------- ---------------
node2 ONLINE
node1 OFFLINE

Thanks in advance for any advice or ideas, everything is welcome... I am still investigating.
 
I understand from the above that you have two resource groups, mycluster1 and mycluster2, right? And you are using takeover via IP replacement, right? Can you list the subnets you are using and how many VLANs? Can you explain more about the cluster design? Is HACMP running on both nodes? Can you see the LVs and the disks associated with each resource group correctly?

It must be something with the network (as it always is), but to clear up your configuration, let us review the design!
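One concrete thing worth checking in this situation: clstat and cldump get their data via clinfo/SNMP, so a stale address in a clhosts-style file that no longer matches /etc/hosts could plausibly leave each node seeing only itself. A portable sketch of that cross-check (file paths and contents here are invented placeholders, not the poster's real files):

```shell
# Sketch: check that every address listed in a clhosts-style file
# (one address per line) also appears in the hosts file.
# All paths and contents below are placeholders for illustration.
missing_from_hosts() {
    # $1 = clhosts-style file, $2 = hosts file
    while read -r addr; do
        case "$addr" in ''|'#'*) continue ;; esac
        grep -q "^${addr}[[:space:]]" "$2" || echo "$addr"
    done < "$1"
}

hosts=$(mktemp); clh=$(mktemp)
printf '10.2.2.10 mynode1\n10.2.2.20 mynode2\n' > "$hosts"
printf '10.2.2.10\n10.9.9.20\n' > "$clh"
missing_from_hosts "$clh" "$hosts"   # prints: 10.9.9.20
```

Any address the sketch prints would be one the old VLAN left behind, and the corresponding clhosts entry would need updating before the nodes can see each other again.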

Regards,
Khalid
 