migrate aix 5.2 from shark to dmx


rondebbs (MIS), Dec 28, 2005
Hello, we will be using mirrorvg to migrate our AIX 5.2 TL9 VGs to the new DMX. This works well because we can do the mirrorvg while the system is up and running - no outage. We will extendvg with new DMX disks and then mirrorvg from the shark vpaths to the new DMX disks. Finally we will unmirrorvg the old vpaths and reducevg them out of the VG. At that point the VG sits only on the DMX.
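
For what it's worth, the per-VG sequence we have in mind looks roughly like this (datavg and the hdiskpower/vpath names are just placeholders for the real devices):

extendvg datavg hdiskpower10 hdiskpower11    # add the new DMX disks to the VG
mirrorvg datavg hdiskpower10 hdiskpower11    # mirror every LV onto the DMX disks, apps stay up
# sanity check that lsvg -l datavg shows everything open/syncd, then drop the shark copy
unmirrorvg datavg vpath10 vpath11
reducevg datavg vpath10 vpath11              # the VG now lives only on the DMX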

One of the VGs (ora_prd below) is a problem because it is already maxed out at 128 PVs, so we can't extendvg it. 127 of these PVs have no free PPs, and the VG is over 2 TB. My thought is to take a short outage and manually copy the /u01/data4 file system to the new DMX disks. Once that copy is done, modify Oracle to point to the new file system on the DMX and bring the app back up. Since we are not currently using host-based striping, I was hoping that would free up several PVs. Once the PVs are freed I can reducevg them out of the VG. This would allow me to extendvg with new, larger DMX disks. Once I have enough DMX disks in the VG I can do my normal mirrorvg. Is this the best strategy, or should I do something else?
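
Roughly what I have in mind for data4 (all device names and the new VG name are made up for illustration; the copy method itself is still open):

mkvg -s 128 -y dmxdata_vg hdiskpower20 hdiskpower21 hdiskpower22 \
     hdiskpower23 hdiskpower24 hdiskpower25            # new space on the DMX, 128 MB PPs
mklv -y data4new_lv -t jfs2 -x 4147 dmxdata_vg 4147    # same 4147 PPs as ora_prd_data4
crfs -v jfs2 -d data4new_lv -m /u01/data4new -A yes
mount /u01/data4new
# short outage: stop Oracle, copy the data, repoint Oracle at the new file system
cd /u01/data4 && find . -print | backup -i -q -f - | (cd /u01/data4new && restore -x -q -f -)
# afterwards: remove the old /u01/data4 (rmfs), reducevg the freed vpaths out of
# ora_prd, then extendvg with the larger DMX disks and mirrorvg as usual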


ora_prd:
LV NAME          TYPE     LPs    PPs    PVs  LV STATE    MOUNT POINT
ora_prd_app      jfs2      230    230     3  open/syncd  /u01/app
loglv00          jfs2log     1      1     1  open/syncd  N/A
ora_prd_data     jfs2     8766   8766    66  open/syncd  /u01/data
ora_prd_tmp      jfs2      156    156     2  open/syncd  /u01/work
ora_prd_data3    jfs2      589    589     5  open/syncd  /u03/data3
ora_prd_data2    jfs2     2233   2233    22  open/syncd  /u01/data2
ora_prd_redo_lv  jfs2      148    148     1  open/syncd  /u01/prd_redo
prd_tmp_lv       jfs2      222    222     4  open/syncd  /u01/prd_tmp
ora_prd_data4    jfs2     4147   4147    32  open/syncd  /u01/data4

VOLUME GROUP:    ora_prd                  VG IDENTIFIER:   0025132c00004c00000000fbcc2375cf
VG STATE:        active                   PP SIZE:         128 megabyte(s)
VG PERMISSION:   read/write               TOTAL PPs:       16628 (2128384 megabytes)
MAX LVs:         512                      FREE PPs:        136 (17408 megabytes)
LVs:             9                        USED PPs:        16492 (2110976 megabytes)
OPEN LVs:        9                        QUORUM:          65
TOTAL PVs:       128                      VG DESCRIPTORS:  128
STALE PVs:       0                        STALE PPs:       0
ACTIVE PVs:      128                      AUTO ON:         yes
MAX PPs per PV:  1016                     MAX PVs:         128
LTG size:        128 kilobyte(s)          AUTO SYNC:       no
HOT SPARE:       no                       BB POLICY:       relocatable



 
One more thought. Looking at some of the PVs below you can see that some are 148 PPs and some 74 PPs. vpath162 has 136 free PPs. Could I "migratepv vpath163 vpath162"? This should free up vpath163, and I could reducevg it and extendvg with one large DMX disk. I could then migratepv several other shark PVs to the new DMX disk. As I reducevg more shark disks I can add more DMX disks. Sound good?
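
In command terms that would be something like this (hdiskpower30 is a made-up name for the first large DMX disk):

migratepv vpath163 vpath162        # move vpath163's 74 PPs into vpath162's 136 free PPs
reducevg ora_prd vpath163          # vpath163 is now empty, drop it from the VG
extendvg ora_prd hdiskpower30      # and put a large DMX LUN in its place
migratepv vpath158 hdiskpower30    # then start draining more shark vpaths onto it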

PV NAME     PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
vpath158    active     148         0          00..00..00..00..00
vpath159    active     148         0          00..00..00..00..00
vpath161    active     148         0          00..00..00..00..00
vpath162    active     148         136        30..18..29..29..30
vpath163    active     74          0          00..00..00..00..00
vpath164    active     74          0          00..00..00..00..00
vpath165    active     74          0          00..00..00..00..00
 
Sounds good to me, but the DMX volumes can't be huge. I see in your lsvg output that you can have up to 1016 PPs per PV, so about 130 GB per PV. I'd go for 100 GB LUNs (20 LUNs for a 2 TB VG).
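
The arithmetic behind that, roughly:

1016 PPs per PV x 128 MB PP size  = 130,048 MB, i.e. about 130 GB max per PV
100 GB LUN  = 102,400 MB / 128 MB = 800 PPs per LUN
16,628 PPs in ora_prd / 800 PPs   = about 21 LUNs, so 20-odd 100 GB LUNs covers the whole VG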

Oh and you may want to schedule the migratepv (or mirrorvg/unmirrorvg) during the wee hours...


HTH,

p5wizard
 
Thanks P5, good info. If I want to add larger DMX volumes, can I use chvg to enlarge the PP size? If I changed the PP size to 256, could I still have 1016 of these larger PPs per PV?

Also, I have used mirrorvg before to migrate large VGs during the day with all users blasting the system. For whatever reason it has never impacted user response time. I'm guessing that it throttles down or runs at a low enough priority that it has not been an issue. I have never used migratepv. I'm assuming that it will work similarly to mirrorvg, i.e. it will not slow my users/apps. Is this a good assumption on my part?

Thanks
 
AFAIK the PP size can't be changed on an existing VG; you'd have to re-create the VG and start from scratch.

But if you can plan the downtime, of course you can create a new VG and do the migration of the data off-line...
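
As a sketch (the VG name and disks are invented, and 256 MB is just an example PP size):

mkvg -s 256 -y ora_prd_new hdiskpower40 hdiskpower41 hdiskpower42
# recreate the LVs and file systems in ora_prd_new at the sizes you need,
# copy the data across during the planned downtime, then swap the mount
# points / Oracle paths over to the new file systems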

Up to you.


HTH,

p5wizard
 
With some PVs holding only 74 PPs, and 136 PPs free in the VG, you should be able to:

1) Free up an existing 74 PP vdisk device by moving the data to the disk with 136 PPs free
2) Remove the vdisk device from the VG (reducevg ora_prd vdiskNN)
3) Add a DMX device to the VG (extendvg ora_prd hdiskpowerNN)
4) migratepv from vdiskZZ to hdiskpowerNN devices, again and again and again (see the loop sketch below)...
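
The repetition in step 4 could look something like this once a DMX disk with free PPs is in the VG (hdiskpower12 is a made-up target; the vpath names are the 74 PP ones from the listing above):

for OLD in vpath163 vpath164 vpath165
do
    migratepv $OLD hdiskpower12 && reducevg ora_prd $OLD
done
# then extendvg ora_prd with the next DMX LUN and repeat for the next batch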

But this depends on whether you are comfortable using migratepv to move the data. Personally speaking, "migratepv" has been very robust and completely dependable for me in the past.

Worth considering.

HTH.
 
If you have a problem with the number of PVs, I would go with migratepv. This lets you add/migrate/remove disk by disk, and it can be done with no outage.
 
Thanks guys for the info on migratepv. I have never used this command, but it sounds similar to mirrorvg in that I should be able to run it during the day. I have done major migrations with mirrorvg during the day without impacting users.
 
Well, FWIW...

Depending on the amount of data you need to migrate, the degree of activity on your disks/LUNs, and the amount of patience your users are willing to show, I would NOT recommend running either mirrorvg or migratepv during peak hours. Both commands are based on the same low-level tools to create (and remove) and synchronize copies of physical partitions. These tools use the same buffers in the disk device drivers as your applications' reads and writes, and if migratepv or mirrorvg/mklvcopy is occupying those buffers, your users'/applications' reads and writes may spend time waiting for free slots in that buffer space...
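
If you do end up running it during the day anyway, at least keep an eye on the disks involved, something along these lines (device names are placeholders, and you may have to watch the underlying hdisks rather than the vpath pseudo devices, depending on what iostat reports on your box):

iostat -d vpath158 hdiskpower30 10    # watch the source and target disks, 10-second intervals
# if %tm_act sits near 100 and the users start to notice, stop and
# reschedule the rest of the migration for the wee hours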


HTH,

p5wizard
 