Volume Group Status Area - Structure Definitions.

Dec 5, 2002
Does anyone know the structure definitions for the VGSA, i.e. location on disk, data types, etc.?

I've been able to empirically identify that the stale PP map starts at offset 65548, but I cannot find any official docs from IBM on the subject.
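
For reference, here's roughly how I'm poking at the disk - a minimal sketch, where the device name is just an example and the offset is the one I found empirically:

    /* Sketch: hex-dump the sector containing a given byte offset on a PV.
     * Device name and offset are examples, not gospel. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/hdisk1";     /* hypothetical PV block device */
        long offset = 65548;                 /* empirically found stale PP map start */
        long base = offset - (offset % 512); /* sector-align the read */
        unsigned char buf[512];
        int fd, i, j;

        fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (lseek(fd, (off_t)base, SEEK_SET) < 0) { perror("lseek"); return 1; }
        if (read(fd, buf, sizeof buf) != sizeof buf) { perror("read"); return 1; }
        for (i = 0; i < 512; i += 16) {      /* classic 16-bytes-per-row dump */
            printf("%08lx:", base + i);
            for (j = 0; j < 16; j++)
                printf(" %02x", buf[i + j]);
            printf("\n");
        }
        close(fd);
        return 0;
    }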

Any pointers or resources appreciated.
 
Hi Sectorseveng,
see files /usr/include/lvmrec.h and /usr/include/lvm.h
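
A rough sketch of reading that record off the disk - the struct below is only a guessed stand-in, so take the real definitions (and the record's actual sector; I'm assuming PSN 7 here) from lvmrec.h:

    /* Sketch: read the on-disk LVM record from a PV and print the fields
     * discussed in this thread. The struct is hypothetical -- the real
     * layout (names, types, padding) lives in /usr/include/lvmrec.h. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define PSN_LVMREC 7    /* assumed sector of the LVM record -- verify */

    struct lvm_rec_guess {
        char lvm_id[4];     /* magic id string */
        char vg_id[16];     /* volume group id */
        int  lvmarea_len;   /* length of the LVM reserved area */
        int  vgda_len;      /* VGDA length -- units per lvmrec.h */
        int  vgda_psn[2];   /* PSNs of the two VGDA copies */
        /* ... remaining fields omitted ... */
    };

    int main(void)
    {
        unsigned char sector[512];
        struct lvm_rec_guess rec;
        int fd = open("/dev/hdisk1", O_RDONLY);   /* hypothetical PV */
        if (fd < 0) { perror("open"); return 1; }
        lseek(fd, (off_t)PSN_LVMREC * 512, SEEK_SET);
        read(fd, sector, sizeof sector);
        memcpy(&rec, sector, sizeof rec);         /* dodge alignment issues */
        printf("id=%.4s vgda_psn={%d,%d} vgda_len=%d\n",
               rec.lvm_id, rec.vgda_psn[0], rec.vgda_psn[1], rec.vgda_len);
        close(fd);
        return 0;
    }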
Boris
 
Thanks Boris,

Unfortunately, /usr/include/lvmrec.h is part of the problem, and I'm having difficulty understanding some of the output.

For example, reading the disk using the structure defined in lvmrec.h gives a PSN of 136 for the VGDA, which is as expected, but vgda_len keeps returning 2098, which is a bit confusing.
I thought it should be 512 at most, because only one physical sector is reserved for the VGDA, and using 2098 as the length puts the end somewhere in the LV information records.
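
For what it's worth, if I take vgda_len as a count of 512-byte sectors rather than bytes (just a guess on my part, not from any IBM doc), the arithmetic comes out like this:

    /* Quick arithmetic with the values I'm reading back; the unit
     * (sectors, not bytes) is my assumption. */
    #include <stdio.h>

    int main(void)
    {
        long vgda_psn = 136;    /* PSN from the LVM record */
        long vgda_len = 2098;   /* assumed unit: 512-byte sectors */

        printf("VGDA would span PSN %ld..%ld (%ld bytes)\n",
               vgda_psn, vgda_psn + vgda_len - 1, vgda_len * 512);
        return 0;
    }

That would make the VGDA a roughly 1 MB area, which would suggest the LV information records I'm running into are part of the VGDA itself rather than beyond it.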

Regards,
Clive


 
You said VGSA in your original post, but your last post mentions the VGDA. The VGSA is 127 bytes (1 bit per partition, so 127 bytes x 8 bits = 1016 partitions).

The VGDA structure info, as far as I know, is confidential IBM data and isn't released.
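
So a stale-PP test is just bitmap arithmetic. A sketch - note the bit ordering within each byte is an assumption you'd want to verify against a disk with a known stale PP:

    /* Bitmap arithmetic for a 127-byte stale PP map: one bit per PP,
     * 127 * 8 = 1016 partitions. */
    #include <stdio.h>

    /* Nonzero if PP number 'pp' (0-based) is marked stale. */
    static int pp_is_stale(const unsigned char map[127], int pp)
    {
        return (map[pp / 8] >> (pp % 8)) & 1;   /* assumed LSB-first order */
    }

    int main(void)
    {
        unsigned char map[127] = {0};
        map[2] = 0x01;                          /* pretend PP 16 is stale */
        printf("PP 16 stale? %d\n", pp_is_stale(map, 16));
        printf("PP 17 stale? %d\n", pp_is_stale(map, 17));
        return 0;
    }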
 
You may be thinking of the LVCB (Logical Volume Control Block), which is the first 512 bytes of a logical volume.
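
You can see it by dumping the LV's first sector as text - a quick sketch, with the LV name just an example:

    /* Sketch: print the first 512 bytes of a classic LV -- the LVCB --
     * showing printable characters, dots elsewhere. */
    #include <stdio.h>
    #include <ctype.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[512];
        int fd, i;

        fd = open("/dev/rlv00", O_RDONLY);   /* hypothetical raw LV device */
        if (fd < 0) { perror("open"); return 1; }
        read(fd, buf, sizeof buf);
        for (i = 0; i < 512; i++)
            putchar(isprint(buf[i]) ? buf[i] : '.');
        putchar('\n');
        close(fd);
        return 0;
    }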
 
It's VGSA, VGDA, LVCB and probably something else I don't even know about.

Background to this is that a customer asked me if there was any way around the 'stale PP with no good copy to synchronise from' situation. This can occur when using LVM mirrors with parallel scheduling and something goes awry. At the moment the only fix is to back up the data, delete and recreate the LVs, and restore the data.

As data integrity in this case is maintained by the application (an RDBMS with redo logs, etc.), that workaround is functionally the same as magically telling the LVM to forget about the stale PPs and just carry on, except that it requires a large outage for the backup/restore.

I've been able to identify some of the locations which denote staleness, but not all, so it looks like unless IBM release some docs the message to the customer is "Sorry, cannot be done".
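
What I'd like to be able to do, if the layout were documented, is something like this - a pure sketch, NOT safe to run against real data, since the offset, the bit order, and whatever timestamps or checksums the VGSA carries are all unknowns (and both VGSA copies would need the same edit):

    /* DANGEROUS SKETCH ONLY: clear one stale bit in an on-disk VGSA copy. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/hdisk1";   /* hypothetical PV */
        long stale_map = 65548;            /* empirically found map offset */
        int pp = 42;                       /* example PP to mark clean */
        unsigned char byte;
        int fd = open(dev, O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        lseek(fd, (off_t)(stale_map + pp / 8), SEEK_SET);
        read(fd, &byte, 1);
        byte &= ~(1u << (pp % 8));         /* assumed LSB-first bit order */
        lseek(fd, (off_t)(stale_map + pp / 8), SEEK_SET);
        write(fd, &byte, 1);
        close(fd);
        return 0;
    }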
 

Take a look at the redbook "LVM concepts".

Cheers Henrik Morsing
IBM Certified AIX 4.3 Systems Administration
 
Everyone, thanks for the pointers, but I'm more confused than ever before.
Has the stale PP with no good copy problem been fixed in AIX 5.1?

On a 5.1 system I created a VG with two disks, set up a job to write constantly to an FS, and then pulled one of the disks, creating stale PPs.
I then exported the VG and created another VG on the good disk, put back the missing disk and imported the 1st VG from it.

I was then able to delete the 'good' copy with no whining from lreducelv.

If IBM have already solved the problem then I don't need to.
 

But how can that 'problem' be solved? The VGSA is not updated on the disk you're pulling out, so there's absolutely nothing stopping it from being imported.

Cheers Henrik Morsing
IBM Certified AIX 4.3 Systems Administration
 
It's not the import that's the problem; it's the dropping of the 'good' (but no longer existing) copy. At 4.3.3, attempting the reducevg produced the message:
0516-076 lreducelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.

At 5.1 I am not seeing this message.

Regards,
Clive
 
reducevg -d -F
should do the trick (I've used it already).
 
gileb - Doesn't work for me:

0516-076 lreducelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.

Regards,
Clive
 
I had a similar problem with wrong 'stale' information (stale PPs on BOTH disks), so the only way for me to solve it was that way, and it worked... But in your case, I don't see why you would have to do a reducevg removing the 'good' disk from the VG?
 
I assume your volume group won't vary on? When you run varyonvg do you see PVNOTFND or PVINVG?
 
gileb - this can happen if the disk that LVM thinks is 'good' is not available, e.g. power supply problems take out both disks and the surge kills the 'good' disk, or you do a FlashCopy of an LVM-mirrored VG (where the mirror copy is on a different ESS and the LVs are in use at the time the FlashCopy takes place).

unix2dae - no problem with the varyon (apart from the fact that it cannot sync).

Everyone - This is not a problem at 5.1 ML3 (at ML2 the importvg fails and whinges about the LTG on the missing disk). I've got a call in to IBM, and if they confirm that this is supposed to work this way (and won't be changed), then I'm well pleased.
 
I know it may happen, but in your case I don't see why it happens... As I told you previously, it happened once for me, and the reducevg -d -F did the trick. I don't know why it didn't help in your case.
 