Helping out another organization with a server they have. It had 30G disks in a mirror on an LSI disk controller. We swapped the disks one at a time, so the mirror is now made of 146G disks and shows up in the BIOS as one 146G disk. LVM in the OS still insists on seeing the disk as 30G, though, and refuses to let me grow the logical volume, insisting no room is available. How do I get LVM to understand that the size of the disk has changed?
The drive as it shows up in dmesg when it first appears:
Code:
sd 2:0:0:0: [sda] 286511105 512-byte logical blocks: (146 GB/136 GiB)
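So the kernel itself already reports the full 146 GB for the raw device. I assume the same thing can be double-checked at the block-device level with something like this (device name /dev/sda taken from the dmesg line above):

Code:
# Confirm the size the kernel currently reports for the raw device
lsblk /dev/sda
blockdev --getsize64 /dev/sda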
How the OS sees the LVM partition:
Code:
[root@postgres-02 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
28322828 10549072 16335036 40% /
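df only shows the filesystem sitting on the logical volume, of course, not what LVM thinks the physical volume or volume group looks like. I assume the layers underneath can be inspected with something along these lines (a sketch; "VolGroup" is taken from the device-mapper path above, and I'm assuming the PV sits on /dev/sda or a partition of it):

Code:
# Show physical volumes, volume groups, and logical volumes as LVM sees them
pvs
vgs VolGroup
lvs VolGroup
# Detailed view of the PV, including the device size LVM has recorded for it
pvdisplay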
What happens when I try to grow the LVM partition:
Code:
[root@postgres-02 ~]# lvextend -f -L130G /dev/mapper/VolGroup-lv_root
Extending logical volume lv_root to 130.00 GiB
Insufficient free space: 26255 extents needed, but only 0 available
Where is the stuck information in LVM that still says this is a 30G disk, and how do I change it so LVM realizes there is more room?
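My guess is that the old size is recorded on the physical volume (and possibly in the partition table underneath it), so the fix would be something along these lines, but I'd rather have someone confirm the sequence before I run it on this box (the partition number and the ext filesystem are assumptions on my part):

Code:
# If the PV lives on a partition, grow that partition first
# (assuming partition 2 of /dev/sda; growpart comes from cloud-utils)
growpart /dev/sda 2
# Tell LVM the physical volume can now use the extra space
pvresize /dev/sda2
# Then extend the logical volume and the filesystem on it
lvextend -L130G /dev/mapper/VolGroup-lv_root
resize2fs /dev/mapper/VolGroup-lv_root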