
HACMP: error adding file system to 2nd node


baggetta
Technical User
Feb 27, 2003
I'm trying to add a file system on an existing VG created in smit (not smit hacmp). When I go into smit hacmp and try to add the f/s to the existing VG, the VG does not show up. Is this happening because the VG was created in smit and not in smit hacmp? The error I get is:
cl_chfs: Error executing clupdatevg volumename vgstringID on node 1.

 
Not exactly. Probably this VG is not part of any Resource Group, so HACMP doesn't consider it a shared one.
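One way to double-check from the command line (the HACMP ODM class name below is my assumption and may differ between HACMP versions):

# odmget -q "name=VOLUME_GROUP" HACMPresource

The value fields in the returned stanzas are the VGs HACMP treats as shared; if yours isn't listed, it isn't in any resource group.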
 
My mistake, the VG was created in "smit hacmp"; if I do a listing of shared VGs it shows up. When I added an LV, I picked the shared VG in the list and it added the LV, but it was not mounted. Doing lsvg -l vgname I could see that it was created, not synced, and with no mount point.
 
Here's what I currently see on this shared VG:

# lsvg -l prodjit2vg
prodjit2vg:
LV NAME        TYPE    LPs  PPs  PVs  LV STATE    MOUNT POINT
prodjit2extlv  jfs     400  800  2    open/syncd  /prodjit2ext
loglvjit2      jfslog  1    1    1    open/syncd  N/A
prodjit2lv     jfs     138  276  2    open/syncd  /prodjit2


# lslv -l prodjit2lv
prodjit2lv:/prodjit2
PV       COPIES       IN BAND  DISTRIBUTION
hdisk37  138:000:000  0%       000:000:000:034:104
hdisk35  138:000:000  0%       000:000:000:033:105


# lslv prodjit2lv
LOGICAL VOLUME:     prodjit2lv                           VOLUME GROUP:   prodjit2vg
LV IDENTIFIER:      000d391a00004c000000010471da904d.3   PERMISSION:     read/write
VG STATE:           active/complete                      LV STATE:       opened/syncd
TYPE:               jfs                                  WRITE VERIFY:   off
MAX LPs:            1024                                 PP SIZE:        64 megabyte(s)
COPIES:             2                                    SCHED POLICY:   parallel
LPs:                138                                  PPs:            276
STALE PPs:          0                                    BB POLICY:      relocatable
INTER-POLICY:       minimum                              RELOCATABLE:    yes
INTRA-POLICY:       middle                               UPPER BOUND:    32
MOUNT POINT:        /prodjit2                            LABEL:          /prodjit2
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
 
Well, normally HACMP behaves this way:
On the node which currently "owns" the VG it creates the filesystem and mount point and mounts the filesystem.
On the other nodes it only modifies the ODM (and adds the relevant mount point).
Check whether the directory (the mount point) exists on the 2nd node.
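For example, on the 2nd node (mount point name taken from your lsvg output):

# ls -ld /prodjit2
# lsfs /prodjit2

The first should show the directory, the second should show the filesystem stanza if HACMP added it to /etc/filesystems on that node.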
 
Yes, the prodjit2 directory is on the other node (node 1).
Now what? ...
Are there any commands we can use to see why it wouldn't expand the f/s on node 1?
 
Looking back at some notes from when IBM did this, they created everything locally on node 2, then put it into HACMP with the following:
smit hacmp
Cluster Resources / Discover Current Volume Group Configuration
local config
look for the 2 hdisks we used
then Change/Show Resources/Attributes for a Resource Group and check if the VG name is in the list
Synchronize Cluster Resources
then Add a Shared Logical Volume
then at this point go into Add a File System to an Existing LV...

Can someone verify the above? If I sync up, will I be able to expand the f/s on the 2nd node?
 
Try the following on the 2nd node:
lsfs | grep name_of_filesystem (obviously name_of_filesystem is the name of the filesystem you previously added)
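One other quick check before you sync is to make sure both nodes see the same physical volumes for the shared VG (VG name taken from your earlier output):

# lspv | grep prodjit2vg

Run it on both nodes and compare the PVIDs in the second column - they must match, even if the hdisk numbers differ between the nodes.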
 
I ran lsfs | grep filesystem and it came back with:

/dev/prodjit2lv -- /prodjit2 jfs -- rw no no


 
This means that the /prodjit2 filesystem has been added on the 2nd node.
Obviously you can only expand it on one node at a time; it will be physically expanded on the node which "owns" the VG containing the filesystem, and the related changes to the ODM (regarding the logical volume the filesystem is built on) will be propagated to the 2nd node.
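A quick way to see which node currently "owns" the VG (VG name taken from your earlier output):

# lsvg -o | grep prodjit2vg

Run it on both nodes; it should only come back on one of them, and that is the node where the physical expansion (the chfs) actually runs.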
 
This is what I think as well, but I have never encountered errors like the ones I get now about the 2nd node when expanding a file system. Will an importvg on the 2nd node update the ODM, or will this happen regardless when a fail-over happens?

The whole interesting part is that this is a VG that IBM added when we bought new disks. Now that it doesn't work for some reason, they want no part of supporting it over the phone or email... they want to open a service call since we don't have HACMP support... good old IBM. Make it work while they're with the customer, then make sure it doesn't work when the customer tries to add something later on... guess this is how they make money. :)
 
Obviously a VG import will do... but C-SPOC has been written exactly for situations like this.
If you have already done the f/s enlargement, you can now cross-check by issuing the following command on both nodes:
odmget -q "name=the_lv_name AND attribute=size" CuAt

where the_lv_name is the name of the LV containing your FS.
You can get its name by executing:
df -k | grep your_fs_name | awk '{print substr($1,6)}'
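For example, on each node in one go (filesystem name assumed to be /prodjit2 from your earlier output):

# LV=$(df -k /prodjit2 | tail -1 | awk '{print substr($1,6)}')
# odmget -q "name=$LV AND attribute=size" CuAt

The size values returned should match on both nodes once the ODM change has been propagated.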
 
I know, but C-SPOC decided not to work this week and do the VG import into the ODM on the destination node. The file system is there, which is good, but again, for some reason, it is unable to update the ODM concerning size. Here's what I got with the query odmget -q "name=prodjit2lv" CuAt on the destination node:

CuAt:
name = "prodjit2lv"
attribute = "lvserial_id"
value = "000d391a00004c000000010471da904d.1"
type = "R"
generic = "D"
rep = "n"
nls_index = 648

CuAt:
name = "prodjit2lv"
attribute = "copies"
value = "2"
type = "R"
generic = "DU"
rep = "r"
nls_index = 642

CuAt:
name = "prodjit2lv"
attribute = "label"
value = "/prodjit2"
type = "R"
generic = "DU"
rep = "s"
nls_index = 640

CuAt:
name = "prodjit2lv"
attribute = "size"
value = "240"
type = "R"
generic = "DU"
rep = "r"
nls_index = 647



When I run the same command on the local node, where the VG is varied on, I get somewhat different results:
CuAt:
name = "prodjit2lv"
attribute = "lvserial_id"
value = "000d391a00004c000000010471da904d.3"
type = "R"
generic = "D"
rep = "n"
nls_index = 648

CuAt:
name = "prodjit2lv"
attribute = "copies"
value = "2"
type = "R"
generic = "DU"
rep = "r"
nls_index = 642

CuAt:
name = "prodjit2lv"
attribute = "stripe_width"
value = "0"
type = "R"
generic = "DU"
rep = "r"
nls_index = 1100

CuAt:
name = "prodjit2lv"
attribute = "size"
value = "210"
type = "R"
generic = "DU"
rep = "r"
nls_index = 647

CuAt:
name = "prodjit2lv"
attribute = "label"
value = "/prodjit2"
type = "R"
generic = "DU"
rep = "s"
nls_index = 640

 
Uhm... this looks like prodjit2lv is 240 partitions on the destination node but 210 on the node where the VG is varied on.
Well... you could change that value with odmchange (the same way C-SPOC does it)... or, if possible, re-import the VG on the 2nd node...
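If you go the re-import route, a rough sketch of what I would run on the 2nd node while the VG is NOT varied on there (the disk name and major number are assumptions; check yours with lspv and with ls -l /dev/prodjit2vg on the node that owns the VG):

# exportvg prodjit2vg
# importvg -y prodjit2vg -V <major_number> hdisk35
# chvg -a n prodjit2vg
# varyoffvg prodjit2vg

importvg varies the VG on, so the chvg -a n (no auto-varyon at boot) and the varyoffvg put it back in the state HACMP expects. Depending on your AIX level, importvg -L prodjit2vg hdisk35 may also work to simply refresh the ODM without a full export/import.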
 