Problems with PV


nmorph (IS-IT--Management), Sep 27, 2004
Hello
I have an H70 cluster with SSA tools. Normally it appears like this:
hdisk0 00572d0a7eb7912f rootvg
hdisk1 00572d0a4ee32230 rootvg
hdisk101 00572d2a5140d978 fhdba01_vg
hdisk103 00572d2a5140d310 fhdba01_vg
hdisk105 00572d2a5140a68a fhdba01_vg
hdisk107 00572d2a6692aad0 fhdba01_vg
hdisk109 00572d2a514071c7 fhdba01_vg
hdisk111 00572d2a5140dcae fhdae_a_vg
hdisk112 00572d0ad7c257c4 None
hdisk113 00572d2a5140d63b fhdae_b_vg
hdisk115 00572d2a5140e975 fhdae_a_vg
hdisk201 00572d2a5140f328 fhdba01_vg
hdisk203 00572d2a5140e64c fhdba01_vg
hdisk205 00572d0a4034dfc3 fhdba01_vg
hdisk207 00572d2a5140f67e fhdba01_vg
hdisk209 00572d2a5140f9a6 fhdba01_vg
hdisk211 00572d2a5140ecad fhdae_a_vg
hdisk212 00572d0ad7c25f3f spare02
hdisk213 00572d2a5140efe1 fhdae_b_vg
hdisk215 00572d0a40acfa91 fhdae_a_vg

but after a shutdown/restart the system looks like this:

hdisk0 00572d2a8001c396 rootvg
hdisk1 00572d2a4b3c03a8 rootvg
hdisk101 00572d2a5140d978 fhdba01_vg
hdisk103 00572d2a5140d310 fhdba01_vg
hdisk105 00572d2a5140a68a fhdba01_vg
hdisk107 00572d2a6692aad0 fhdba01_vg
hdisk109 00572d2a514071c7 fhdba01_vg
hdisk111 00572d2a5140dcae fhdae_a_vg
hdisk112 00572d0ad7c257c4 spare01
hdisk113 00572d2a5140d63b fhdae_b_vg
hdisk115 00572d2a5140e975 fhdae_a_vg
hdisk213 00572d2a5140efe1 fhdae_b_vg

Another problem is that VG fhdba01_vg has stale partitions and some disks have the status removed.

fhdba01_vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk101 active 271 161 27..26..00..54..54
hdisk103 active 271 182 55..00..19..54..54
hdisk105 active 271 196 55..04..29..54..54
hdisk201 removed 271 161 27..18..08..54..54
hdisk109 active 271 217 55..03..51..54..54
hdisk203 removed 271 182 55..00..19..54..54
hdisk205 removed 271 196 55..04..29..54..54
hdisk207 removed 271 211 55..44..04..54..54
hdisk209 removed 271 217 55..03..51..54..54
hdisk107 active 271 211 55..44..04..54..54
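
For reference, the listings above are the kind of output lspv and lsvg -p produce; the same views can be pulled up with:

lspv                    # hdisk name, PVID, owning volume group
lsvg -p fhdba01_vg      # per-PV state and free PPs within the VG
lsvg -l fhdba01_vg      # per-LV state (stale copies show up here)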

Can anyone help me fix this problem?

Best Regards

Nuno Catarino
 
What does the error report (errpt) say?
 
Hi,

When you say you have a cluster, do you mean HACMP?
If so, in order to shut down/restart a server you have to stop the cluster first, so that it varies off the volume groups
and unmounts the filesystems cleanly, and only then shut down/restart the server.
You normally get the removed state when the volume group thinks the physical volume has been removed; you can change it back to available by issuing chpv -v a on the disk.
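
A rough example, using hdisk201 (one of the disks shown as removed in your lsvg -p listing):

chpv -v a hdisk201      # mark the PV available again within its volume group

Repeat for each disk that lsvg -p reports as removed.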

 
Hi

Do you have an HACMP cluster?
If so, are you stopping the cluster before rebooting the server? The cluster stop will vary off the volume groups, stop the applications and unmount the filesystems.

If you rebooted without stopping it, the server thinks it has lost half its disks, so partitions show up as stale (they were never cleanly released by the cluster) and some disks get marked as removed.

You can fix the removed disks by running chpv -v a on each one; this makes the disks available again, and then you resync the LVs.
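
As a rough sketch (the loop over lsvg -p is just one way to pick up every disk reported as removed in fhdba01_vg):

# mark every removed PV in the VG available again, then resync
for pv in $(lsvg -p fhdba01_vg | awk '$2 == "removed" {print $1}'); do
    chpv -v a $pv
done
syncvg -v fhdba01_vg    # resynchronise the stale partitions/LV copies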

HTH
 
Hello

Thanks for the support

The server is in an HACMP cluster, and it is running Sparetools from IBM. Every time we reboot, we stop the cluster first.

So to solve my problem I should stop the server, run the chpv command for each disk that has the removed status, and finally sync the VGs.

Is that all?

Thanks

NMC
 
Hi,
You shouldn't need to do this if you stop the cluster properly.

I would do the following (a rough command sketch is at the end of this post):
1. First check the state of each volume group and make sure all disks are available; if any are in the removed state, use the chpv command to make them available.
2. Then stop the cluster.
3. Check that the volume groups/filesystems controlled by your cluster are varied off and unmounted successfully.
4. Then do the reboot/shutdown.
5. Then start your cluster up and check the state of the volume groups; all disks should be available.

Try the above; if you still get problems, post the results and we can take a further look.
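
As a command-level sketch of those steps (the cluster stop/start is normally done through the HACMP smit fastpaths, which can differ slightly by HACMP version, and hdiskNNN is a placeholder for whichever disk shows as removed):

lsvg -o                 # 1. list the varied-on VGs, then lsvg -p each one
lsvg -p fhdba01_vg      #    check that every PV is active
chpv -v a hdiskNNN      #    make any removed PV available again
smitty clstop           # 2. stop cluster services on the node
lsvg -o; mount          # 3. confirm shared VGs are varied off and filesystems unmounted
shutdown -Fr            # 4. reboot
smitty clstart          # 5. restart cluster services, then re-check with lsvg -p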
 
Hello

I have rebooted the servers; then I had to put the disks back into the active state and run syncvg on the VG. After that the servers were OK.
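
For the record, the state can be checked afterwards with the same lsvg views as earlier in the thread:

lsvg -p fhdba01_vg      # every PV should now show "active"
lsvg -l fhdba01_vg      # every LV should show "syncd" rather than "stale"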

Thanks for all the support.

 