
HANFS and HACMP


polani (Instructor)
Jun 4, 2003
Guys,

I have a situation where a two-node cluster with a cascading RG configuration is working fine (no HANFS configuration). The shared disk is on a DS4800 and contains a filesystem (/oltlg1), which is exported from NodeB to NodeA over NFS.

The problem: when NodeB fails, the SAN disk moves from NodeB to NodeA along with the IP address without any problem, but the application cannot be started on NodeA, because it is still looking for NodeB:/oltlg1. Even the NFS client on NodeA ends up in a hang state (the df command hangs). The application vendor now says the application is still looking for the NFS-mounted filesystem rather than the locally available one.

Is there any way to configure NFS on NodeB to export using the service IP address rather than NodeB's node name? Would that solve the issue?

Or should I go straight to an HANFS configuration rather than a simple cascading RG configuration? As I understand it, that just means putting NFS into the RG. Would it be a big change to the HA configuration (besides the RG configuration changes)? I would appreciate any document links, etc.

Please advise.

Regards



Here comes polani Once again!!!

P690 Certified Specialist
HACMP & AIX Certified Specialist
AIX & HACMP Instructor
 
I think you need to update your HACMP startup/failover scripts. Instead of specifying nodeA or nodeB in the scripts, use the service name or service IP address.
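
Something along these lines, for example (only a sketch; the label svc_oltp and the application path are placeholders for whatever your cluster actually uses):

    #!/bin/ksh
    # Sketch of an HACMP application start script.
    SVC=svc_oltp        # service IP label, not a node name (placeholder)
    FS=/oltlg1          # filesystem the application needs

    # If the RG (and the SAN disk) is on this node, the filesystem is
    # already mounted locally; otherwise mount it over NFS from the
    # service label, so the same script works on whichever node runs it.
    mount | grep -q " ${FS} "
    if [ $? -ne 0 ]; then
        mount -o soft,intr ${SVC}:${FS} ${FS} || exit 1
    fi

    # Start the application against the mount point, not a node name.
    /opt/app/bin/start_app -d ${FS}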
 
It depends on what your NFS export is being used for. If its availability matters to the data protected by HACMP (i.e. the data on the SAN), then it is better to use HANFS; otherwise it is safer to control it from your startup/stop scripts.

I have an NFS export for Oracle patches on a two-node HACMP cluster, which I control from the startup/stop scripts, since its availability is not critical to the RG protected by HACMP.

I hope I'm explaining this well; it's hard to put into words. :~\

Regards,
Khalid
 
If you are going to use the startup/stop scripts, make sure the filesystem's auto-mount option is set to false, meaning that your scripts will control the mounting.
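
In /etc/filesystems that means the stanza for the mount should carry mount = false, roughly like this (nodeb_svc is a placeholder for your service IP label):

    * Sketch of the /etc/filesystems stanza for the NFS mount.
    /oltlg1:
            dev       = "/oltlg1"
            vfs       = nfs
            nodename  = nodeb_svc
            mount     = false
            options   = bg,soft,intr
            account   = false

With mount = false the filesystem is not touched at boot; the HACMP start script decides when, and from where, to mount it.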

Regards,
Khalid
 
Guys,

Thanks for the responses.

The main question still remains the same: how can I force my NFS server to export the NFS filesystem using the HACMP service IP address instead of the hostname?

I mean, it should look like this:

ServerA_svc:/olfs instead of ServerA:/olfs on the standby node (the NFS client) of the HACMP cluster.

Please elaborate on the NFS configuration required in that case.

Regards

Here comes polani Once again!!!

P690 Certified Specialist
HACMP & AIX Certified Specialist
AIX & HACMP Instructor
 
At one client site I've seen them change the hostname in the HACMP start/stop scripts to a kind of service hostname that goes with the service IP.
That way the NFS filesystem is exported as service_hostname:/fs, no matter which node it is up on.
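
Roughly like this, with placeholder names (servera_svc for the service label, 10.0.0.50 for the service address): the export itself is not tied to a hostname; what matters is the name the client mounts by, so the client always uses the service label.

    # /etc/hosts on both nodes -- the label follows the service IP:
    10.0.0.50   servera_svc

    # /etc/exports on whichever node currently holds the RG:
    /olfs -access=nodea:nodeb,root=nodea:nodeb

    # Re-export once the RG (and the service IP) are up:
    exportfs -a

    # On the standby node, mount by the service label, never a node name:
    mount servera_svc:/olfs /olfs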
 
polani,

have you by any chance fixed this problem? I've run into the same one and am still looking for a solution...
 
