
Adding a new volume group under an HACMP environment


samirm (Technical User)
May 12, 2003
Hi,

I have two servers under HACMP 4.5 running AIX 5.1. The database VG is varied on on the database server and the application VG is varied on on the application server. The database volume group has a 32 MB PP size and is already holding 16 disks of 36 GB each.

Now we want to add more PVs, but extending the volume group fails: with a 32 MB PP size we can accommodate a maximum of 16 PVs in the volume group.
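For reference, the current limits can be read from the VG itself; a quick check against the VG from the listing below (wms_vg1) shows the PP size and the MAX PVs value it was created with:

# lsvg wms_vg1 | egrep "PP SIZE|MAX PVs"

With 36 GB disks and a 32 MB PP size, each disk needs more than the default 1016 PPs per PV, so the VG was most likely created with a factor of 2, which is what caps it at 16 PVs.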

Hence we have decided to create a new volume group, which will also be defined in the HACMP configuration, so that during a failover it gets varied on on the alternate box.

I want to know the steps to be followed.

Node -A ( Application )
=======
# lsvg
rootvg
wmsapp_vg1
wms_vg1

# lsvg -o
wmsapp_vg1
rootvg

Node -B ( Database )
=======
# lsvg -o
wms_vg1
rootvg


lspv
====

hdisk0 0002f82a7d4f23a1 rootvg
hdisk1 0002f82a267b7231 rootvg
hdisk3 0002f82aaf9044a0 wms_vg1
hdisk2 0002f82ab7fa33ea wms_vg1
hdisk12 0002f82aade6ddee wms_vg1
hdisk10 0002f82a2f5eca73 wms_vg1
hdisk18 0002f83a26f9b017 wms_vg1
hdisk13 0002f82aadfff39b wms_vg1
hdisk21 0002f82ab7fa3925 wms_vg1
hdisk16 0002f82aadfff903 None
hdisk17 0002f82aae06c2bf wms_vg1
hdisk14 0002f82ab53ab083 wms_vg1
hdisk19 0002f82ab0ae3146 None
hdisk24 0002f82ab0ae3698 wms_vg1
hdisk26 none None
hdisk25 none None
hdisk20 0002f83ab604f531 wmsapp_vg1
hdisk33 0002f83a26f9aaab None
hdisk22 0002f83ab604faa8 wmsapp_vg1
hdisk23 0002f83ab6050015 wmsapp_vg1
hdisk7 0002f82aaf904b5c wms_vg1
hdisk8 0002f82ab4cc7a41 wms_vg1
hdisk9 none None
hdisk28 0002f82ab0ae411d wms_vg1
hdisk4 0002f82ab53aab26 wms_vg1
hdisk5 0002f82ab53fe9bd wms_vg1
hdisk6 none None
hdisk30 0002f83ab6050afb wmsapp_vg1
hdisk31 none None
hdisk32 0002f83ab605116c wmsapp_vg1
hdisk15 0002f83a26f9a4ab wmsapp_vg1
hdisk34 0002f82a2ba063ef wms_vg1
hdisk27 0002f83a7f1e154f None
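From that lspv output, the disks still showing None in the VG column are the candidates for the new volume group; assuming the standard three-column lspv format, they can be filtered with:

# lspv | awk '$3 == "None"'

The ones that also show none for the PVID (hdisk25, hdisk26, and so on) will get a PVID assigned when they are added to a VG, or via chdev -l hdiskN -a pv=yes.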


 
I've never done it myself yet, but here is how I would do it; maybe someone has corrections if I am wrong or forgot something, thanks ^^

1. Write down the PVIDs of the new VG's disks on node A so you can compare them later on node B.
2. Make the disks reachable from both nodes, i.e. do the zoning or whatever applies to the type of disks you are using.
3. If you don't see the disks on node B already, run cfgmgr there, and varyoffvg the VG on node A so the disks are no longer locked.
4. importvg the new VG and its disks on node B; you should then see them with lspv | grep <vgname>. Compare their PVIDs to be sure - hdisk26 on node A can be hdisk32 on node B (see the command sketch below).
5. If not already done, varyon the VG. (I remember VGs should be configured "AUTO ON: no" so that HA controls which VGs get varied on.)
6. Mount the filesystems if that isn't done automatically and check the data on the disks.
7. Configure HACMP via smitty and add the new VG to the resource group where you need it on both nodes.
8. Sync the cluster.
9. Test it on a weekend if possible ;) We sadly have no test cluster at our company ; ;
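A minimal command sketch of steps 3-5, assuming the new VG is called newwms_vg and turns up as hdisk26 on node A and hdisk35 on node B (the names are only examples; match the disks by PVID first):

On node A (note the PVIDs of the VG's disks first):
# lspv | grep newwms_vg
# varyoffvg newwms_vg

On node B:
# cfgmgr
# lspv
(find the hdisk with the matching PVID, hdisk35 in this example)
# importvg -y newwms_vg hdisk35
# chvg -a n newwms_vg
(AUTO ON: no, so HACMP controls the varyon)
# varyonvg newwms_vg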

laters
zaxxon
 
Forgot to add that you have to take care to use new LV names and mount points, so you don't accidentally mount the new filesystems over existing ones when a takeover happens.
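A small sketch of that, just as an example; the LV name, mount point and size below are made up, the point is only that they must not clash with anything that already exists on either node:

# mklv -y wmsnew_lv01 -t jfs newwms_vg 100
# crfs -v jfs -d wmsnew_lv01 -m /wmsnew01 -A no
(-A no keeps the filesystem out of the automatic mounts at boot; HACMP mounts it when the resource group comes up)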

laters
zaxxon
 
Thanks zaxxon,
for these wonderful tips.

I will review this; my situation is the same. I don't have any test environment in place, hence I want to confirm first before taking any steps on Production.

The disks we have are SSA. All are 36 GB.
And I can see all the disks from both servers.

Sam
 
Hi,

If you can see all the SSA disks from both servers, does that mean you have a disk tray connected to each server?

If so, will you mirror the logical volumes from one disk tray to the other?
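If the answer is yes, a rough sketch of mirroring a VG across the trays (the VG and disk names are just placeholders; mirrorvg mirrors every LV in the VG onto the named disks and syncvg then synchronises the copies):

# extendvg newwms_vg hdisk40 hdisk41
# mirrorvg newwms_vg hdisk40 hdisk41
# syncvg -v newwms_vg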

Just to elaborate on zaxxon's steps.

Note: you can do the below via HACMP C-SPOC or manually; the manual way is below (a consolidated command sketch follows step 11).

1. Create your volume group. Note: you must note down the major number of the new volume group (this will be used to import it on the other node), or create the volume group with a major number specified.
NB: you can find the next available major number with /usr/bin/lvlstmajor.

2. Make sure the hdisks are the same on both servers. This can be done with lsattr -El hdisk? | grep conn (this lists the serial numbers of the SSA disks). Make sure you know the hdisk numbers on both servers, since they will be used to import the volume group; in some situations hdisk4 on node 1 is, for example, hdisk6 on node 2.

3. Once the volume group is created on node 1, create the logical volumes and filesystems. (Note: you don't want the volume group to be varied on automatically, and turn quorum off.)

4. On the second node, make sure the same disk has a PVID. Run lspv and check the hdisk?; you may have to assign a PVID if none is present (chdev -l hdisk? -a pv=yes).

5. On the 1st node, umount the filesystems and varyoff the volume group.

6. On the second node, import the volume group:

importvg -V <major number> -y <volume group> hdisk? (the disk identified in step 2)

7. Once imported, check smitty vg and make sure the volume group is not set to varyon automatically and that quorum is turned off.

8. You can even try to mount the filesystems on the second node.

9. Once happy, umount the filesystems and varyoff the volume group.

10. smitty hacmp: add the new volume group to the resource group and synchronise the cluster.

11. Then perform a simple failover test to make sure the volume group gets varied on on the other node.
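Pulling the manual commands from those steps together into one sketch; the VG name newwms_vg, the disks hdisk26/hdisk35, the mount point /wmsnew01 and the major number 45 are only examples, substitute your own. A 64 MB PP size is used here so a 36 GB disk stays under the default 1016 PPs per PV:

On node 1:
# lvlstmajor
(pick a free major number, say 45; ideally check it is free on both nodes)
# mkvg -y newwms_vg -V 45 -s 64 hdisk26
# chvg -a n -Q n newwms_vg
(no auto-varyon, quorum off)
# lsattr -El hdisk26 | grep conn
(note the SSA serial so you can find the same disk on node 2)
... create the LVs and filesystems, then:
# umount /wmsnew01
# varyoffvg newwms_vg

On node 2:
# lsattr -El hdisk35 | grep conn
(should show the same serial as hdisk26 on node 1)
# chdev -l hdisk35 -a pv=yes
(only needed if lspv shows no PVID for the disk)
# importvg -V 45 -y newwms_vg hdisk35
# chvg -a n -Q n newwms_vg
# mount /wmsnew01
# umount /wmsnew01
# varyoffvg newwms_vg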
 
Wonderful, DSMARWAY, thanks for these steps.
But the filesystems which I am creating on node A (the node I am exporting from): do I need to put that filesystem information under the HACMP settings too?

At present I can see the list of filesystems that have been defined under the cluster topology information.

Thanks again ..

Sam
 
Hi,

Under the HACMP resource group, add the new volume group name in the Volume Groups field. If the Filesystems entry is left at its default (All), the filesystems in that volume group should be mounted automatically on takeover, so you normally don't need to list them one by one.
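Once the cluster is synchronised, you can double-check what ended up in the resource group with the cluster utilities; on HACMP 4.5 the path is usually /usr/sbin/cluster/utilities (on later HACMP/ES releases it is /usr/es/sbin/cluster/utilities):

# /usr/sbin/cluster/utilities/clshowres
(lists the volume groups and filesystems configured per resource group)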
 