
ugly VG


Mag0007 (MIS)
I have a VG like this.
Code:
vg06:
LV NAME           TYPE      LPs   PPs   PVs  LV STATE      MOUNT POINT
loglv09           jfslog    1     1     1    closed/syncd  N/A
fslv02            jfs       496   496   1    closed/syncd  N/A
fslv03            jfs       660   660   2    closed/syncd  N/A

vg06:
PV_NAME           PV STATE  TOTAL PPs  FREE PPs  FREE DISTRIBUTION
hdisk19           removed   539        538       108..107..107..108..108
hdisk20           removed   539        43        00..00..00..00..43
hdisk21           removed   0          0         00..00..00..00..00
hdisk22           removed   0          0         00..00..00..00..00
hdisk1            removed   539        0         00..00..00..00..00
hdisk2            removed   539        418       108..00..94..108..108
0516-304 lsvg: Unable to find device id 000274230789b1e7 in the Device
        Configuration Database.
000274230789b1e7  removed   539        539       108..108..107..108..108
hdisk4            active    539        539       108..108..107..108..108

Anyone know how I can mount these filesystems?

TIA!
(THIS WEEK IS ALMOST OVER!)
 
Hi,
The command below will display the mount points of the filesystems created on logical volumes fslv02 and fslv03.
Afterwards, mount the filesystems at the locations it displays.

loglv09 is a jfslog managed by the OS; you can't do anything with it.

Code:
lsfs | grep fslv0[23]
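For example, if lsfs reports mount points for those two logical volumes (the /data02 and /data03 paths below are placeholders, not from your system; use whatever lsfs actually shows), you would then simply:

Code:
mount /data02   # placeholder mount point for fslv02
mount /data03   # placeholder mount point for fslv03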

 
First you need to find out where the 'removed' PVs have gone. Without them, you can't access the data. The only disk that is active is hdisk4, but that one is empty.

Try

mkdev -l hdisk19

Repeat for all 'removed' disks (a loop like the sketch below works) and see where that gets you.
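For example (a sketch; adjust the list to match the 'removed' disks in your lsvg -p output):

Code:
for d in hdisk19 hdisk20 hdisk21 hdisk22 hdisk1 hdisk2
do
    mkdev -l $d    # attempt to configure the disk and bring it to the Available state
done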


HTH,

p5wizard
 
You can try to rebuild your ODM information using the redefinevg command...

redefinevg -d hdisk1 vg06

This can mess things up worse or fix things perfectly. It has always helped in every case where I have needed to use it.

If redefinevg does not work, the synclvodm command can sometimes help.
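For example (a sketch, nothing more; synclvodm rebuilds the ODM entries for a volume group from the LVM information on its disks):

Code:
synclvodm vg06    # resynchronize the ODM with the LVM data for vg06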


Jim Hirschauer
 
mkdev -l on the hdisks shows everything as Available.

lsfs does not have these filesystems.
/etc/filesystems does not have them either.

 
Can you still access the disks?

Does

bootinfo -s hdisk19

still work? It should give a number (the size of the disk in MB).
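For example, to check all of the 'removed' disks in one go (a sketch; the disk list comes from your lsvg -p output):

Code:
for d in hdisk19 hdisk20 hdisk21 hdisk22 hdisk1 hdisk2
do
    echo "$d: $(bootinfo -s $d) MB"    # print each disk's reported size
done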


HTH,

p5wizard
 
I still can't access the disks.
I remember forcing the importvg on these volumes.

bootinfo gives me 4315 MB.

Here is a sample output for one of the LVs; not sure if it helps.
Code:
lslv fslv02
LOGICAL VOLUME:     fslv02                 VOLUME GROUP:   vg06
LV IDENTIFIER:      0002742316a3ef9c.2     PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       closed/syncd
TYPE:               jfs                    WRITE VERIFY:   off
MAX LPs:            512                    PP SIZE:        8 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                496                    PPs:            496
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        N/A                    LABEL:          None
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes
 
Let me take a guess how you got here...

This server "inherited" a disk from another server. You noticed e PVID on it and tried to run importvg to get at the data. Importvg complained about missing members of the VG and not being able to import it. You decided to forcibly importvg it.

Now you have a VG which is imported, but it is missing 7 of its 8 member PVs. The one PV that you do have is an empty PV.

If there's no other valuable stuff on this server, you might try redefinevg or synclvodm as Jim pointed out, but if you can't risk crashing this system or the other VGs on it, I would suggest you DON'T try it.

So imho: No, you can't mount the filesystems...
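To see the mismatch for yourself (a sketch; lsvg -p lists the member disks the imported VG expects, while lspv lists the physical volumes and PVIDs this server actually has):

Code:
lsvg -p vg06    # member disks the ODM thinks belong to vg06
lspv            # disks (and their PVIDs) actually present on this box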

HTH,

p5wizard
 
p5wizard:

As usual, thanks for the great support!

(wish I had support for this box, but it seems you are better than they are)

again, THANX
 
Hey Mag,
For each hdisk that shows "removed", please try
chdev -l hdisk# -a pv=yes
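For example (a sketch; substitute the disks your lsvg -p listing shows as 'removed' for the # placeholder):

Code:
for d in hdisk19 hdisk20 hdisk21 hdisk22 hdisk1 hdisk2
do
    chdev -l $d -a pv=yes    # make sure the disk's PVID is set/read into the ODM
done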


and let me know if that helps.

Mohmin.
 