
5.2 to 5.3 upgrade

Status
Not open for further replies.

rondebbs

MIS
Dec 28, 2005
109
US
We are getting ready to upgrade our mission critical 24/7 application from 5.2 ml3 to 5.3 tl7. The application is called Credit Revue and uses a Progress database. It has many complicated components and interfaces. We have had several problems while upgrading test systems. We only have a few hours for an outage window at night to complete the production upgrade.

Based on our upgrades of the test systems we will likely have problems so we are looking at ways to minimize risk. We have a SAN and all file systems are on EMC DMX Symmetrix 1000 storage.

A different possibility is to create a new 5.3 tl7 LPAR. Get everything working perfectly in the new 5.3. On the night of conversion we would simply move all of the 5.2 file systems to the new 5.3 system. I think I could export all the 5.2 AIX Volume Groups with exportvg and then import or recreate on the new 5.3 using importvg or recreatevg. I would need to zone all of the LUNs (symmetrix devices) to the 5.3 system on the night of conversion and run cfgmgr and EMC's powermt config to present LUNs to the new 5.3.
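The VG move described above might look something like the following sketch. The VG name (datavg), mount point, and hdiskpower numbers are placeholders, not from the post; on a PowerPath system the VG would typically sit on hdiskpower devices.

```shell
# On the old 5.2 host: stop the app, unmount, and export the data VG.
# datavg, /credit_revue, and hdiskpower4 are placeholder names.
umount /credit_revue              # unmount every filesystem in the VG
varyoffvg datavg
exportvg datavg

# Zone the LUNs to the new 5.3 LPAR, then on the 5.3 host:
cfgmgr                            # discover the newly presented LUNs
powermt config                    # build the EMC PowerPath hdiskpower devices
importvg -y datavg hdiskpower4    # import using any one disk in the VG
# or, if the disks are copies with new PVIDs rather than the originals:
# recreatevg -y datavg hdiskpower4 hdiskpower5
mount /credit_revue
```

importvg is the normal path when the same physical LUNs move between hosts; recreatevg is only needed for cloned copies (e.g. BCV/snap volumes) whose PVIDs differ from the originals.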

Does this make sense? Any ideas on whether we should stick with the traditional upgrade or simply move all VGs/LUNs to the new "pristine" 5.3?

 
rondebbs, I'm not sure how the application really works, but if I were you I would (as you did at first) construct a test environment, get the application side of it working (by testing, testing and testing), and have the users run functional tests on the test LPAR. By the night of the migration, I would expect you to have an identical copy of your production LPAR on the test system. Then all you have to do is copy the data over to the new machine and swap the machine names in your DNS, and there you go!

You need some of the users to check the data after the migration and give their initial sign-off before you go live with the new LPAR.

Your fallback plan in this case is just to change the DNS names back (which might take a few minutes) and give the system back to the users. How fast you can get the system back is what matters to the users if the migration (hopefully not) fails.

If you export the LUNs, there might be some delays in the fallback plan, which you don't want, as you will be under pressure!

I hope this works fine for you. Good Luck.

Regards,
Khalid
 
Why not look at using alt_disk_install to clone your rootvg?

If you have a spare disk, you can clone rootvg to this disk, and boot from it and upgrade. If the upgrade fails, you can change the bootlist to boot from your preserved rootvg. A quick back out.

If you don't have a spare disk, you can split your rootvg mirror and clone to the second disk.

This process works well. We use it as standard for upgrading AIX.
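A rough sketch of the spare-disk variant, assuming hdisk0 is the current rootvg disk and hdisk1 is the spare (both placeholder names):

```shell
# Clone the running rootvg to the spare disk, then boot from the
# clone and run the 5.3 migration against it.
alt_disk_install -C hdisk1   # -C = clone rootvg (AIX 5.2 syntax;
                             #  5.3 also provides alt_disk_copy -d hdisk1)
bootlist -m normal hdisk1    # next boot comes from the clone
shutdown -Fr

# Back-out if the upgrade on the clone goes wrong: point the
# bootlist at the untouched original rootvg and reboot.
bootlist -m normal hdisk0
shutdown -Fr
```

The appeal is exactly what the post says: the preserved rootvg disk is a ready-made fallback, so backing out is a bootlist change and a reboot rather than a restore.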

However, I agree with Khalidaaa and would advise testing your applications on a test platform with a copy of your data. You should use this test platform to test the upgrade procedure I have just described, too.

 
Hello,

If you have access to a NIM Server and the rootvg of the client server is on 2 hard disks, I would use NIM Migration Installation. It works really well! Here are some rough instructions.

1. Backup Data and application files
2. Backup /etc/sendmail.cf file
3. Backup /etc/inetd.conf
4. Backup /etc/rc.tcpip
5. Backup /etc/ntp.conf
6. Backup /etc/ssh/sshd_config.
7. Record the following values:
lsattr -El aio0
8. Save the file /usr/lib/security/methods.cfg
9. Obtain known working 5.3 drivers for your EMC unit
10. Break the rootvg mirror and remove one hdisk, e.g. leave hdisk0 as your rootvg and remove hdisk1.
11. Change the client bootlist to point to the remaining hdisk (e.g. hdisk0)
12. From the NIM server run a command similar to this:
nimadm -l 5304lpp_res -c seahawk -s 5304spot_res -j rootvg -d <disk removed in step 10, e.g. hdisk1> -Y
13. Reboot the client; it should boot into 5.3 from hdisk1
14. Export the data volume group.
15. Remove the old 5.2 EMC drivers and install the 5.3 drivers.
16. Import the data VG.
17. As required, restore the files and settings from steps 1 through 8.
18. Done
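The mirror-break portion of the steps above (and the nimadm call from the post) might look like this. The disk names are placeholders for your actual mirror pair; the NIM resource names and client name are the ones given in the post.

```shell
# Break the rootvg mirror so one copy stays intact as the 5.2 fallback.
unmirrorvg rootvg hdisk1     # drop the mirror copy held on hdisk1
reducevg rootvg hdisk1       # remove hdisk1 from rootvg entirely
chpv -c hdisk1               # clear the boot record on the freed disk
bootlist -m normal hdisk0    # boot only from the remaining 5.2 disk
bosboot -ad /dev/hdisk0      # refresh the boot image on the kept disk

# From the NIM master, migrate onto the freed disk:
nimadm -l 5304lpp_res -c seahawk -s 5304spot_res -j rootvg -d hdisk1 -Y
```

nimadm does the migration on a copy while the client keeps running, which is what makes the fallback (boot off the untouched 5.2 disk) so clean.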

The best thing about this approach is that you don't need to do a mksysb beforehand: if anything goes wrong, just boot off hdisk0, which still holds the copy of AIX 5.2.

Brian

 
You don't mention whether your upgrade requires application changes or a new version of the Progress database. If it does, another option may be to set up an LPAR with AIX 5.3 TL7 using the same hostname and get everything working there. To verify the data transfer, you could copy the database LUNs from the old system and present them to the new one. To cut over to production, you would re-copy the database, give the old system a temporary IP, and configure the production IP on the new LPAR. Fallback would be to reconfigure the new server back to the temporary IP and put the production IP back on the old server.
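The IP swap in that cut-over could be sketched as below. The interface name (en0) and addresses are placeholders, not from the post; many shops would use smitty mktcpip for the same change.

```shell
# On the old 5.2 server: move it aside to a temporary address.
chdev -l en0 -a netaddr=10.1.1.99 -a netmask=255.255.255.0

# On the new 5.3 LPAR: take over the production address.
chdev -l en0 -a netaddr=10.1.1.10 -a netmask=255.255.255.0 -a state=up

# Fallback is the reverse: put 10.1.1.10 back on the old server
# and park the new LPAR on the temporary address again.
```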

Tony ... aka chgwhat

When in doubt,,, Power out...
 
I've been on vacation, so I have been away from email. Thanks for all the suggestions; I will look closely at each one. I forgot to mention that during this 5.3 upgrade we will also upgrade the Progress database from version 9 to 10. Some other components may also be upgraded.

Creating a new LPAR is possible, but I will likely need to add more RAM and CPUs to the p670 frame. alt_disk is something we are also pursuing.

We have not used a NIM server so far, but we realize we need to set one up as we move forward. What exactly is the command below doing?

nimadm -l 5304lpp_res -c seahawk -s 5304spot_res -j rootvg -d <disk removed in step 10, e.g. hdisk1> -Y

Thanks - Brad
 
Hello,

The command below

nimadm -l 5304lpp_res -c seahawk -s 5304spot_res -j rootvg -d <disk removed in step 10, e.g. hdisk1> -Y

tells the NIM server to use LPP source 5304lpp_res (think installation CDs!), install to the client seahawk, use SPOT 5304spot_res (a SPOT is a type of NIM resource that a diskless/dataless client boots from and uses!), cache files in rootvg (-j rootvg), install onto the target disk (hdisk1), and accept all license agreements (-Y).


Brian
 
Hi Brad,

I don't think it is a good idea to make the change directly on the same LPAR! You need an exact copy as a test environment, especially since upgrading the database as well is a risky step to perform directly on your production system!

Regards,
Khalid
 