
HACMP: need some help about IPs?

Status
Not open for further replies.

roberto33

Technical User
Jan 4, 2008
2
NZ
Hi

I am confused... I am trying to configure HACMP but I am having some trouble understanding how many IPs we need...

I thought for HACMP we just need 3 IP addresses:
boot (this one should be on a different subnet)
service
standby

and another non-IP path for heartbeat via disk heartbeat (this is what I will use (1 LUN))

but I don't understand why we need to add another IP address named cluster heartbeat??? Is that not the role of the disk heartbeat? Why do we need 4 IPs... Please help me understand this one!

Thanks in advance.
Rob.
 
Hi Rob,

OK, to explain this, first of all let us assume that you have a two-node cluster!

Now there are two ways to configure the IPs! The default is IP address takeover (IPAT) via aliasing, and the other is IP address takeover via replacement!

For each of the above you need at least 4 IPs for the heartbeating (two per node, with each pair on a different subnet), and you need service IP addresses according to the number of resource groups you have. The service IP address needs to be on a different subnet from the above if you are using IPAT via aliasing, but it can be on one of the above subnets if you are using IPAT via replacement.

I hope I haven't lost you so far. To make it clearer, let's take an example:

IPAT via aliasing:

10.1.1.200 service IP

Node 1:
10.1.2.1 Node1_heartbeat1
10.1.3.1 Node1_heartbeat2

Node 2:
10.1.2.2 Node2_heartbeat1
10.1.3.2 Node2_heartbeat2

(for replacement, it will be similar to the above, but you may choose to have the service IP address be, for example, 10.1.2.200 or 10.1.3.200, or leave it the same as above)
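The subnet rule behind the example above can be checked with a short sketch. This is only an illustration using Python's standard `ipaddress` module; the /24 netmasks are my assumption (the thread does not state the masks), and HACMP itself has no such tool.

```python
# Toy check of the IPAT-via-aliasing subnet rule using the example
# addresses above. The /24 masks are an assumption for illustration.
import ipaddress

def subnet(ip, prefix=24):
    """Return the network an address lives on, assuming a /24 mask."""
    return ipaddress.ip_interface(f"{ip}/{prefix}").network

service_ip = "10.1.1.200"
heartbeat_ips = ["10.1.2.1", "10.1.3.1", "10.1.2.2", "10.1.3.2"]

heartbeat_nets = {subnet(ip) for ip in heartbeat_ips}

# IPAT via aliasing: the service IP must sit on a subnet of its own.
assert subnet(service_ip) not in heartbeat_nets

# The four heartbeat addresses span exactly two subnets,
# one address per node on each.
assert len(heartbeat_nets) == 2
print(sorted(str(n) for n in heartbeat_nets))  # ['10.1.2.0/24', '10.1.3.0/24']
```

With IPAT via replacement, the first assertion would be allowed to fail, since the service IP may share a subnet with one of the heartbeat pairs.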

Having another type of heartbeating (besides the disk heartbeat) ensures that if your disk heartbeat fails, you still have another heartbeat path. This avoids a split brain in your cluster (where each node tries to grab the resource group because there is no communication between them; and if it happens that your disks are mirrored, there might be data corruption, because each node might take a copy of the mirror and update the data on its own side!)

Please let me know if this is not what you had in mind!

Regards,
Khalid
 
Hi Khalidaaa,

Thanks a lot for the explanation... it is much clearer now...
I was assuming 2 nodes by default.

But in our configuration we have only 1 IP for the heartbeat
on a different VLAN (private network), 3 IPs on the same VLAN/subnet for boot, standby and service, and 1 disk heartbeat.

Do we really need 2 cluster heartbeat (clhb) IPs?

Thanks again for the links.
Cheers
Rob.
 
Rob,

Well, if you could please list the IPs for a better view, I could answer you more precisely, but the reason for the other 2 IPs is more redundancy.

For example, if we have this config:

10.1.1.200 service IP

Node 1:
10.1.2.1 Node1_heartbeat1
10.1.3.1 Node1_heartbeat2

Node 2:
10.1.2.2 Node2_heartbeat1
10.1.3.2 Node2_heartbeat2

and consider that you have two physical Ethernet adapters per node, then you might have this config on node1:

ent0
serviceIP
Node1_heartbeat1 (both IPs are aliased on the same Ethernet)

ent1
Node1_heartbeat2

On node2

ent0
Node2_heartbeat1

ent1
Node2_heartbeat2

Now if ent0 on node1 fails, the serviceIP address will be swapped to ent1 on the same node, and you end up with this:

Node1
ent0
Node1_heartbeat1
ent1
serviceIP
Node1_heartbeat2

On node2
ent0
Node2_heartbeat1
ent1
Node2_heartbeat2

In this case you still have heartbeating on ent1 of both nodes! (between Node1_heartbeat2 & Node2_heartbeat2)
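The adapter-swap walk-through above can be sketched as a toy model. The dict structure and the `fail_adapter` helper are purely illustrative (nothing like this exists in HACMP); the point is just that only the service alias moves, so a heartbeat path on the second subnet survives the failure.

```python
# Toy model of the scenario above: ent0 on node1 fails, its service
# alias moves to ent1, and the heartbeat pair on the second subnet
# (Node1_heartbeat2 <-> Node2_heartbeat2) keeps working.
node1 = {
    "ent0": ["serviceIP", "Node1_heartbeat1"],
    "ent1": ["Node1_heartbeat2"],
}
node2 = {
    "ent0": ["Node2_heartbeat1"],
    "ent1": ["Node2_heartbeat2"],
}

def fail_adapter(node, dead, survivor):
    """Move only the service alias off the failed adapter; each
    heartbeat (boot) address stays bound to its own adapter."""
    if "serviceIP" in node[dead]:
        node[dead].remove("serviceIP")
        node[survivor].append("serviceIP")

fail_adapter(node1, "ent0", "ent1")
print(node1["ent1"])  # ['Node1_heartbeat2', 'serviceIP']

# A heartbeat pair still exists on the ent1 subnet of both nodes.
assert "Node1_heartbeat2" in node1["ent1"]
assert "Node2_heartbeat2" in node2["ent1"]
```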

In your case, if the network heartbeat on the first node dies, a failover of the whole resource group to the second node will occur, and this will kill all the sessions with your node1 (unlike the first case).

The bottom line: it is IBM's recommendation to have this kind of configuration to safeguard your data! After all, this is not a bible, and you can have your own configuration, but then if something goes wrong and you call them for help, they will blame you for not following their best practices :)

I hope this is clear; if not, please let me know so I can explain it in a different way.

Regards,
Khalid
 
