
Importvg issue


cts123

Technical User
Feb 28, 2007
108
IN
Hi,

I have a situation here. I have two AIX servers (Server-A and Server-B) under HACMP (4.5, a very old version). Normally, "lsvg -o" on Server-A shows wms_vg1, wms_vg2 and rootvg, and on Server-B it shows wmsapp_vg1.

Recently I added one SSA drawer to Server-A and did the SSA cable looping. The drawer came fully populated with 16 disks, and Server-A detected all of them. We extended the volume group wms_vg2 with 12 disks from the new drawer and it went fine. Extending the three LVs and their file systems also went fine.
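Something along these lines, just as a sketch (the hdisk names, LV name, mount point and size here are only placeholders, not the real ones used):

  # on Server-A: add the new SSA disks to the shared VG (example disk names)
  extendvg wms_vg2 hdisk20 hdisk21 hdisk22

  # grow one of the LVs by, say, 100 logical partitions (example LV name)
  extendlv wmslv01 100

  # grow the matching JFS filesystem by 1 GB (size given in 512-byte blocks)
  chfs -a size=+2097152 /wms/data01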

The problem happened on Server-B (it seems I overlooked something there). What I did was export wms_vg2 on Server-B and then try importvg, but it could not find all the supporting disks. The reason: when I ran cfgmgr on Server-B it detected all the disks, but all of them showed no PVID.

Later I ran "chdev -l hdiskZ -a pv=yes" to get PVIDs assigned, and to some extent the volume group got imported. But it is not OK: the new file system I created on Server-A is not showing up on Server-B. Most likely the disks whose PVIDs were only assigned partway through the process did not get imported.
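For what it's worth, this is roughly how I am checking it (standard commands, nothing fancy):

  # on Server-A: the disks and PVIDs that belong to wms_vg2
  lspv | grep wms_vg2

  # on Server-B: all disks and their PVIDs ("none" means no PVID assigned)
  lspv

  # on Server-B: which disks importvg actually pulled into the VG
  lsvg -p wms_vg2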

If I do "lsvg -o" on Server-B it now shows wms_vg2 and wmsapp_vg1, which is not supposed to happen. And if I do a varyoffvg for wms_vg2, it will not vary off; it says the LVs are open. What I should have done was assign the PVIDs on Server-B first and then run importvg, but I didn't, which was my mistake. Now, in this situation, what needs to be done? Can't I do an exportvg of wms_vg2 on Server-B and import it again?
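Just to show what I mean about the open LVs (the mount point below is only an example):

  # which LVs in the VG are open
  lsvg -l wms_vg2

  # filesystems from the VG that are still mounted
  mount | grep wms

  # who is holding a mounted filesystem busy (example mount point)
  fuser -cu /wms/data01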

Please note: at this time the wms_vg2 volume group on Server-B is already in the varied-on state (lsvg -o shows it), and I want to get rid of that. I am 100% sure this improperly imported volume group will create major problems when an actual fail-over happens through HACMP, because it will not find the LVs and supporting file systems on Server-B.

Can you suggest what I can do at this moment? Can't I export wms_vg2 and import it again while it is showing online? How can I get rid of it?

Thanks a lot.

-Sam






 
You already had PVIDs set on nodeA during the VG extension!

It was a bad idea to set the PVIDs on nodeB once again before running importvg there.

The PVIDs of the new disks were not visible on nodeB (although you had them on nodeA after the extendvg there) because you probably ran cfgmgr on both nodes right after you added the new SSA disks. The procedure for adding brand-new SSA disks (with no PVID set) to cluster nodes is as follows (the sequence is important; a command sketch follows the list):

1. on nodeA run "cfgmgr" or "cfgmgr -l ssar"
2. on nodeA run "chdev -l hdiskX -a pv=yes" for all new SSA disks
3. on nodeB run "cfgmgr" or "cfgmgr -l ssar"
4. on nodeA use C-SPOC to "extendvg" (no manual importvg on nodeB will be needed then)
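As a sketch only (the hdisk names are placeholders, the new disks may get different names on each node, and the SMIT fastpath for C-SPOC may differ on HACMP 4.5):

  # step 1 - nodeA: configure the new SSA disks
  cfgmgr -l ssar

  # step 2 - nodeA: stamp a PVID on every new disk (example names)
  for d in hdisk20 hdisk21 hdisk22; do
      chdev -l $d -a pv=yes
  done

  # step 3 - nodeB: configure the disks; they pick up the PVIDs written in step 2
  cfgmgr -l ssar

  # step 4 - nodeA: extend the shared VG through the C-SPOC menus (smitty cl_admin)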

In your case, since the new disks had no PVIDs on nodeB, it would have been enough to fix it by running "rmdev -dl hdiskX" for all the new hdisks without PVIDs and then running "cfgmgr -l ssar" again. The disks would then have been configured with the same PVIDs as on nodeA.
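Roughly like this (only a sketch; it assumes every disk showing "none" in lspv really is one of the new SSA disks, so check the list before removing anything):

  # on nodeB: remove the ODM definitions of all hdisks that have no PVID
  for d in $(lspv | awk '$2 == "none" {print $1}'); do
      rmdev -dl $d
  done

  # rediscover them; they come back with the PVIDs already stamped from nodeA
  cfgmgr -l ssar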

When you ran "chdev -l hdiskX -a pv=yes" on nodeB after the VG had already been extended on nodeA, I think you changed the PVIDs that had been set from nodeA.

Nevertheless, I would recommend stopping the cluster, exporting wms_vg2 on nodeB, running "rmdev -dl hdiskX" on nodeB (remove all the SSA logical hdisks), running cfgmgr again on nodeB, and then checking whether the PVIDs of the hdisks configured in wms_vg2 on nodeA (lspv) are visible on nodeB (a command sketch follows this list):
- if yes, then do the importvg;
- if not, then I guess the PVID change you ran on nodeB replaced the ones configured from nodeA during the extendvg. In that case I don't know an easy procedure to restore the right PVIDs (the ones that were stored in the VGDA during the extendvg on nodeA).
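A sketch of that sequence on nodeB (cluster already stopped; the hdisk names are examples - make sure you only remove the shared SSA disks, not rootvg disks):

  # get the half-imported VG out of the way (unmount its filesystems first if any are mounted)
  varyoffvg wms_vg2
  exportvg wms_vg2

  # remove and rediscover the shared SSA hdisks (example names)
  for d in hdisk10 hdisk11 hdisk12; do
      rmdev -dl $d
  done
  cfgmgr -l ssar

  # this list on nodeB should now match "lspv | grep wms_vg2" on nodeA
  lspv

  # if the PVIDs match, re-import using one of the wms_vg2 disks, then leave the VG offline for HACMP
  importvg -y wms_vg2 hdisk10
  varyoffvg wms_vg2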

There is only one method known to me (I tested it and it worked in my case) for writing the old PVIDs (in your case, the ones you see in the 'lspv' output on nodeA) onto the hdisks (found on

"
One method to get the volume group back is to write the old PVID onto
the disk. Here is a way to do that:


1) Translate the ORIGINAL PVID into its octal version. Take every 2
   digits of the hex PVID and translate them to octal. This can be done
   by hand, with a calculator, a script, or a web page.

   0012a3e42bc908f3 -> 00 12 a3 e4 2b c9 08 f3
   Octal version    -> 000 022 243 344 053 311 010 363

2) Write the binary version of the PVID to the disk by using the octal
   values. Each octal value is preceded by a backslash-zero "\0". Do
   not use spaces or any other characters, except for the final \c to
   keep echo from issuing a hard return.

# echo "\0000\0022\0243\0344\0053\0311\0010\0363\c" | dd of=/dev/hdisk0 bs=1 seek=128

Use the above info at your own risk.

-dan"

(After that, you can again remove all the hdisks and recover them, check that the PVIDs are the same on both cluster nodes, and then re-import the shared VG.)
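If you do have to go that way, a small helper like this can build the octal string for you (only a sketch in plain ksh; it assumes the PVID is given as 16 uppercase hex digits and that bc is installed - verify the result by hand against step 1 before writing anything to a disk):

  # build the "\0NNN" escape string for echo from a hex PVID (example value)
  PVID=0012A3E42BC908F3
  STR=""
  i=1
  while [ $i -le 16 ]; do
      byte=$(echo $PVID | cut -c$i-$((i + 1)))       # next two hex digits
      oct=$(echo "obase=8; ibase=16; $byte" | bc)    # hex byte -> octal
      STR="${STR}\\0$(printf '%03d' $oct)"           # pad to 3 digits, prefix with \0
      i=$((i + 2))
  done
  STR="${STR}\\c"
  print -r "echo string: $STR"

  # afterwards you can dump offset 0x80 of the disk (where the PVID lives) to verify:
  lquerypv -h /dev/hdisk0 80 10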

I hope you will have no need to use such a brute-force method and that someone will give you a better solution. I also think you should ask IBM support about this case.
 
Thanks a lot ogniemi,

I just compared the PVIDs of the disks assigned to that volume group on both servers. They are all the same except for one disk. I believe that during the importvg the VG got imported into the ODM without that disk's information.

At this moment I believe I have to stop the cluster on Server-B, varyoffvg wms_vg2, exportvg wms_vg2 and then re-import it. I will keep you posted after doing this, but I have to do it in a maintenance window.
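For the window I am planning roughly this on Server-B, per your advice (hdiskX is just a placeholder for one of the disks whose PVID already matches Server-A):

  # with cluster services stopped on Server-B
  varyoffvg wms_vg2
  exportvg wms_vg2
  importvg -y wms_vg2 hdiskX
  varyoffvg wms_vg2        # leave it offline so HACMP can vary it on at fail-over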

Thanks again.

-Sam
 