
hacmp topology - IP Label issue...


alexia32 (Technical User)
Jul 31, 2007
Hi,

I am trying to configure HACMP on 2 VIO clients. I have the virtual IPs set up and a SAN disk shared (I have configured everything on my VIO servers), and I have configured the network on my VIO clients.
But when I configure the topology network it does not really work, so I need some help please.
I have been trying this for so many days that I don't know what to look at anymore.

Here is what I have: 2 VIO clients, both at the same AIX level (5.3), with HACMP installed.

Each node has 3 virtual network adapters for the service, boot and standby IPs, all on the same subnet, plus one network card on another (private) VLAN for the cluster heartbeat.

node1 and node2 are configured the same (with different IPs of course): boot, service and standby on the same VLAN/subnet and the cluster heartbeat on another, private VLAN:
en0 (virtual adapter): service IP
en1: cluster heartbeat IP on another VLAN
en3 (virtual adapter): boot IP
en4 (virtual adapter): standby IP
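For reference, the adapter-to-address mapping this gives on node1 is roughly the following (addresses taken from the cltopinfo and ifconfig output further down; node2 follows the same pattern with its own addresses):

en0  10.60.34.70     node01      (service)
en3  10.60.34.71     node01boot  (boot)
en4  10.60.34.72     node01stby  (standby)
en1  192.167.100.40  node01clhb  (cluster heartbeat, private VLAN)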

I have defined one LUN in a concurrent VG to use for the disk heartbeat.
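Since the verification output further down warns "A reserve may be set on that disk by another node", it is also worth checking up front that no SCSI reserve is held on the shared LUN. A rough sketch only (the exact attribute name, reserve_policy or reserve_lock, depends on the disk driver, so check lsattr first):

# lspv | grep hdisk2                              <- the same PVID should appear on both nodes
# lsattr -El hdisk2 | grep -i reserve             <- show the current reserve setting
# chdev -l hdisk2 -a reserve_policy=no_reserve    <- only if the driver exposes reserve_policy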

So here is my cluster topology. I have a problem configuring the IP label and I don't know why, so maybe you can help.

I am just at the beginning:
# /usr/es/sbin/cluster/utilities/cltopinfo
Cluster Name: clustertst1
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 3 network(s) defined

NODE node01:
Network net_diskhb_01
node01_disk2 /dev/hdisk2
Network net_ether_01
node02 10.60.34.73
node01 10.60.34.70
node01stby 10.60.34.72
node01boot 10.60.34.71
Network net_ether_02
node01clhb 192.167.100.40

NODE node02:
Network net_diskhb_01
node02_disk2 /dev/hdisk2
Network net_ether_01
node02 10.60.34.73
node01 10.60.34.70
node02stby 10.60.34.75
node02boot 10.60.34.74
Network net_ether_02
node02clhb 192.167.100.41

No resource groups defined


As you can see, when I try to add the service IP via smitty hacmp under Resources -> Configure HACMP Service IP Labels/Addresses -> Add a Service IP Label/Address,
I select multiple nodes and then it appears on both nodes!
I also tried to add it differently, using a persistent IP label in the topology as well, but after verification it failed.

Well, I still don't have my cluster working on the VIO clients, and I don't know what is wrong.
Here are the steps I followed:
I have configured /etc/hosts and netmon.cf on both nodes, etc.; all the prerequisites are done (a sketch of the entries is just below).
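For example, the entries behind the labels look roughly like this in /etc/hosts on both nodes (addresses as in the cltopinfo output above), and netmon.cf is just a list of ping targets, one address per line (e.g. the default gateway):

10.60.34.70     node01
10.60.34.71     node01boot
10.60.34.72     node01stby
10.60.34.73     node02
10.60.34.74     node02boot
10.60.34.75     node02stby
192.167.100.40  node01clhb
192.167.100.41  node02clhb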
I have also tested my disk heartbeat (the LUN):
on node1:
/usr/sbin/rsct/bin/dhb_read -p rhdisk2 -r
Receive Mode:
Waiting for response . . .
Magic number = -2023406815
Magic number = -2023406815
Magic number = -2023406815
Magic number = -2023406815
Magic number = -2023406815
Magic number = -2023406815
Link operating normally

on node2:
# /usr/sbin/rsct/bin/dhb_read -p rhdisk2 -t
Transmit Mode:
Magic number = -2023406815
Detected remote utility in receive mode. Waiting for response . . .
Magic number = -2023406815
Magic number = -2023406815
Link operating normally

Topology:
1. Define the name of the cluster
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP Cluster -> Add/Change/Show an HACMP Cluster

2. Configure the nodes of the cluster
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Nodes -> Add a Node to the HACMP Cluster

3. Configure network and device
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Networks
I have created 2 networks: one on the public VLAN (where I have my service, boot and standby IPs) and another on the private VLAN (where I have the cluster heartbeat IP), and I have created 1 diskhb device.

4. Configure HACMP communication interfaces/devices.
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP Communication Interfaces/Devices -> Add Communication Interfaces/Devices (Select Add Pre-defined Communication Interfaces and Devices and Select Communication Interfaces and Select the network)
I did that for the boot, standby and cluster heartbeat IPs and for the disk heartbeat device.

5. Configure Resource IP/Label
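Before verifying, the definitions can be listed again to check that everything registered, for example (cllsif just prints a per-interface view of the labels, their role and their network; treat it as a sketch if it is not present at your HACMP level):

# /usr/es/sbin/cluster/utilities/cltopinfo
# /usr/es/sbin/cluster/utilities/cllsif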

When I tried to verify and synchronise, it failed.

Here is the result:
# /usr/es/sbin/cluster/utilities/cldare -t

Verification to be performed on the following:
Cluster Topology
Cluster Resources

Retrieving data from available cluster nodes. This could take a few minutes......

Verifying Cluster Topology...

WARNING: node01: Read on disk /dev/hdisk2 failed.
Check cables and connections.
A reserve may be set on that disk by another node.

WARNING: node02: Read on disk /dev/hdisk2 failed.
Check cables and connections.
A reserve may be set on that disk by another node.

ERROR: Network: net_ether_01 requires a service IP label.


Verifying Cluster Resources...

WARNING: The service IP label: node02 on node: node01 is not configured
to be part of a resource group. It will not be acquired and
used as a service address by any node.

WARNING: The service IP label: node01 on node: node01 is not configured
to be part of a resource group. It will not be acquired and
used as a service address by any node.


Verification exiting with error count: 1

cldare: Failures detected during verification. Please correct
the errors and retry this command.

Verification has completed normally.



=> Regarding "ERROR: Network: net_ether_01 requires a service IP label.": I have already tried to create a service IP label, but...
Well, I don't know what to do.

Thanks in advance for your help!
Cheers
Al
 
I don't see any resource groups in what you posted. How many resource groups do you have? Normally the service IP address is associated with a resource group.

What does ifconfig -a show?

Regards,
Khalid
 
Hi,

I haven't created the resource group yet. I was first testing that the topology definition would work before continuing with the configuration.

Here is the result of ifconfig on both nodes:

node1:
# ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.70 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
inet 192.167.100.40 netmask 0xffffff00 broadcast 192.167.100.255
tcp_sendspace 131072 tcp_recvspace 65536
en3: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.71 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en4: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.72 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1


node2:
# ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.73 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
inet 192.167.100.41 netmask 0xffffff00 broadcast 192.167.100.255
tcp_sendspace 131072 tcp_recvspace 65536
en3: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.74 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en4: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.60.34.75 netmask 0xffffff00 broadcast 10.60.34.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
 
Hi,

I have found my error! With VIO clients we need to use IPAT via Aliases. I thought I was using it, but it looks like I didn't set it up properly; now everything is working fine.
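(For anyone hitting the same thing: with IPAT via aliasing, once the resource group is online the service address simply shows up as an extra inet line on one of the interfaces, so a quick ifconfig on the node that holds the resource group gives something like the sketch below; which interface gets the alias and which address is the base depends on your final layout.)

# ifconfig en3
en3: flags=1e080863,480<UP,BROADCAST,...>
        inet 10.60.34.71 netmask 0xffffff00 broadcast 10.60.34.255
        inet 10.60.34.70 netmask 0xffffff00 broadcast 10.60.34.255    <- service address aliased on top of the boot address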

Thanks again for the help
Regards
Al
 
Alexia,

Glad you found your error. But to go back to what you said previously, a service IP address needs to be inside a resource group (even if you are using aliasing). It has to be part of a resource group so that when you fail over, the resource group is moved to the standby node along with the service IP address.
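Roughly (menu names can differ a little between HACMP levels, so take this as a sketch): smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP Extended Resource Group Configuration -> Add a Resource Group, then Change/Show Resources and Attributes for a Resource Group, and put your service IP label in the "Service IP Labels/Addresses" field. After that, verify and synchronise again.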

Regards,
Khalid
 
Khalidaa,
You're right, I agree with you. But I was just starting my cluster configuration, and before creating the resource group I wanted to be sure that the topology was correct and could be synchronised on its own; then I would create the resource group and synchronise again.

Thanks a lot for your help!!!!
Cheers
Al
 