I was getting ready to run a reducevg command on hdisk1 on a server, after unmirroring it in preparation for an alt_disk_migration.
Problem is, I was on another screen and ran the reducevg command on hdisk1 on a server with rootvg mirrored and running. Not too bad, except the dump area was not mirrored and the command errored out. Still not too bad, except that now lspv shows:
[server1:me]> lspv
hdisk0          000170280e854766                    old_rootvg
hdisk1          00017028300f1aba                    rootvg          active
hdisk3          00017028dbfde1ef                    optvg           active
hdisk4          000170286c20ff2c                    bkupvg          active
hdisk5          00017028507d1725                    optvg           active
hdisk6          0001702871fd6633                    bkupvg          active
hdisk8          00017028b945cc22                    vg01            active
Notice that hdisk0 is now listed as old_rootvg
And to compound the issue, whoever originally set this up did not create a boot logical volume on hdisk1 after mirroring:
[server1:me]> lspv -l hdisk1
hdisk1:
LV NAME             LPs   PPs   DISTRIBUTION          MOUNT POINT
hd6                 168   168   00..108..60..00..00   N/A
hd8                 1     1     00..00..01..00..00    N/A
hd4                 30    30    00..00..30..00..00    /
hd2                 150   150   20..00..17..108..05   /usr
hd9var              60    60    60..00..00..00..00    /var
hd3                 60    60    00..00..00..00..60    /tmp
Notice there is no hd5. When I tried to run the bosboot command to force it:
# bosboot -ad /dev/hdisk1
0516-306 lslv: Unable to find hd5 in the Device
Configuration Database.
0301-168 bosboot: The current boot logical volume, /dev/hd5,
does not exist on /dev/hdisk1.
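If it comes to forcing a boot image onto hdisk1, the rough plan I have in mind (untested, and the mklv flags are my assumption from memory, so please correct me) would be to create a boot-type hd5 on hdisk1 first and then rerun bosboot:

```shell
# ASSUMPTION: create a boot-type LV named hd5 on hdisk1 (1 LP is typical),
# then rebuild the boot image on that disk. I have NOT run this yet.
mklv -y hd5 -t boot rootvg 1 hdisk1
bosboot -ad /dev/hdisk1
```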
Since hdisk0 is sitting in a state of old_rootvg, I cannot do anything with it until I vary it on.
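From what I can tell, the old_rootvg definition can be dropped so the disk becomes usable again, and then hdisk0 could be pulled back into rootvg and remirrored. Something like the following is my best guess for 5.3 (the alt_rootvg_op syntax is an assumption, so please correct me):

```shell
# ASSUMPTION: remove the old_rootvg definition from the ODM
# (should leave the data on hdisk0 untouched)
alt_rootvg_op -X old_rootvg
# then pull hdisk0 back into rootvg and remirror
extendvg rootvg hdisk0
mirrorvg rootvg hdisk0
```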
I have changed the bootlist back to hdisk0, so that if the system does reboot it might actually come back up. It is up and running now.
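For reference, the bootlist change was along these lines (normal mode; exact invocation from memory):

```shell
# Point the normal-mode boot list back at hdisk0, the disk
# that still has a valid hd5 boot image.
bootlist -m normal hdisk0
# Verify the current normal-mode boot list:
bootlist -m normal -o
```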
What steps can I take to get this server back in shape? I would have expected hdisk1 to be the problem disk, so that I could remove it and then reattach it to rootvg.
Two weeks ago I migrated the system from 5.2.4 to 5.3.3. I DID NOT use an alt_disk_migration.
Thank you
Potentially Toasted!