codewallah
Programmer
Hi,
I currently have a 6 x 18 GB VG with 4 LVs. Each LV was originally set up on 2 disks, and 2 extra copies of each LV were then made on the next two disk pairs, so hdisk2/3 hold the 1st copy, hdisk4/5 the 2nd copy and hdisk6/7 the 3rd copy. I no longer want triple mirroring, as I need to make better use of the disks, so I would like to use 3 disks for the data and the other 3 for mirroring. The PP size is 32 MB and each disk has 543 PPs available (the maximum).

I need to maximize both space and speed, but I have read that you cannot stripe and mirror at the same time. If LV1 needs 840 PPs, LV2 480 PPs, LV3 195 PPs and LV4 90 PPs, and I want 2 copies of each LV, how would you lay out the new VG?

My current idea is to use the maximum spread option, place one third of each LV's PP requirement on each of hdisk2/3/4, and then either mirror the VG onto hdisk5/6/7 or create a copy of each LV on hdisk5/6/7. I do not use savevg for backups, but find | backup -i.
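To put numbers on it: each copy needs 840 + 480 + 195 + 90 = 1605 PPs, which works out at 535 PPs per disk when spread evenly across 3 disks, so it should just squeeze into the 543 PPs each disk offers. Roughly what I have in mind is something like the following (an untested sketch only; datavg and lv1-lv4 are placeholder names for my real VG and LVs):

# new VG on all six disks, 32 MB PP size
mkvg -y datavg -s 32 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7

# first copy of each LV spread over hdisk2/3/4 (-e x = maximum inter-disk spread)
mklv -y lv1 -e x datavg 840 hdisk2 hdisk3 hdisk4
mklv -y lv2 -e x datavg 480 hdisk2 hdisk3 hdisk4
mklv -y lv3 -e x datavg 195 hdisk2 hdisk3 hdisk4
mklv -y lv4 -e x datavg  90 hdisk2 hdisk3 hdisk4

# second copy of each LV on hdisk5/6/7
mklvcopy -e x lv1 2 hdisk5 hdisk6 hdisk7
mklvcopy -e x lv2 2 hdisk5 hdisk6 hdisk7
mklvcopy -e x lv3 2 hdisk5 hdisk6 hdisk7
mklvcopy -e x lv4 2 hdisk5 hdisk6 hdisk7

# then synchronise the new copies
syncvg -v datavg

Is that the sensible way to do it, or would you mirror the whole VG instead?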
Your thoughts / recommendations would be appreciated.
Thanks,
Brian.