
How to apply an OS kernel patch on a Solaris 10 server whose root is mirrored with a ZFS pool


Allu72 (IS-IT--Management), Oct 30, 2014

Hi,

Can someone please help me with how to apply an OS kernel patch on a Solaris 10 server whose root file system is mirrored with a zpool (ZFS file system)?

I have done this with SVM- and VxVM-mirrored root file systems by splitting the mirror to keep one safe copy as a back-out plan, just in case the patching causes any issues.

Regards,

Allu72
 
I've found Live Upgrade to be the easiest way to do this. Simply create a backup boot environment (man lucreate for details) and then install the patch. If you have trouble, you can activate the backup environment and get back to your pre-patch state. Another option is to break the mirror and install the patch; once you've determined that nothing broke, reattach the other disk to resilver the mirror. If there is an issue with the patch, you can shut down, boot from the non-patched disk, and attach the patched disk as a mirror to get back to your pre-patch state. A rough ZFS sketch of the mirror-split route follows below.
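If you go the mirror-split route on ZFS, the commands look roughly like the sketch below. This is a minimal sketch, not a tested runbook: the device names are taken from the zpool status posted later in this thread, and the back-out caveat in the comments matters.

# Drop one half of the root mirror so it preserves the pre-patch state.
zpool detach rpool c2t1d0s0

# ...apply the patch to the running root, reboot, and test...

# Happy path: reattach the spare half; ZFS resilvers it automatically.
zpool attach rpool c2t0d0s0 c2t1d0s0
zpool status rpool          # watch the resilver progress

# Caveat: unlike an SVM/VxVM submirror, a disk removed with "zpool detach"
# is not directly bootable, because detaching invalidates its ZFS labels.
# If your update level has "zpool split" (Solaris 10 9/10 and later), that
# produces a cleanly importable copy instead. Verify the back-out path on
# your release before relying on it.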

Personally, I install the quarterly patchsets and use Live Upgrade to make the process quick and easy. You can create a boot environment and install the whole patchset into it without interrupting users. After activating the new boot environment, a simple reboot gets you running with the new patches with minimal downtime.


_______
Linnorm
 
Hi Linnorm,

Thank you very much for the update.

You mean, if I detach the mirror, it should work the same as with SVM and VxVM?

And if I opt to create an additional backup boot environment with lucreate as you suggested, how much additional space do I need, and on which file system?

Here is the info from the server:
root@jee# df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/s10s_u7wos_08
33G 7.2G 5.3G 58% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 743M 1.6M 742M 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
13G 7.2G 5.3G 58% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
13G 7.2G 5.3G 58% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
rpool/ROOT/s10s_u7wos_08/var
33G 18G 5.3G 78% /var
swap 793M 51M 742M 7% /tmp
swap 742M 40K 742M 1% /var/run
backup 291G 491M 291G 1% /backup
rpool/export 33G 491M 5.3G 9% /export
rpool/export/catalog-DR
33G 7.8M 5.3G 1% /export/catalog-DR
rpool/export/home 33G 45K 5.3G 1% /export/home
nfs02sol 18G 18G 205M 99% /nfs02sol
rpool 33G 94K 5.3G 1% /rpool
root@jee#
===========================================================
root@jee# zpool status
pool: backup
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
backup ONLINE 0 0 0
  /dev/rdsk/c7t60060480000287751455534653353235d0 ONLINE 0 0 0

errors: No known data errors

pool: nfs02sol
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
nfs02sol ONLINE 0 0 0
  c7t60060480000287751455534653353443d0 ONLINE 0 0 0

errors: No known data errors

pool: rpool
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
  mirror ONLINE 0 0 0
    c2t0d0s0 ONLINE 0 0 0
    c2t1d0s0 ONLINE 0 0 0

errors: No known data errors
root@jee#

Regards,
Allu
 
As a Solaris administrator, I use Live Upgrade (LU) to patch twice a year. My root is also on ZFS. Live Upgrade on ZFS just creates a snapshot and a clone. To answer your question about space: initially the snapshot is empty, so it costs almost nothing; it only grows as the active file system diverges from it. You can find a really good tutorial on ZFS by reading its man page.
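You can watch this for yourself. A minimal sketch (dataset names will vary on your box):

# Check free space in the root pool before creating a boot environment.
zpool list rpool

# List boot-environment snapshots with their space accounting.
zfs list -r -t snapshot -o name,used,referenced rpool/ROOT

# Right after lucreate, the new BE's snapshot shows USED near zero; it
# grows only as blocks in the original BE are overwritten or freed.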

You can do this one of two ways:
1) Use zfs directly to create a snapshot for backup and apply the patches to the live system (a sketch of this follows the list). The catch is that kernel patches usually have to be applied in single-user mode, and that means downtime for the entire time the patches are being applied.
2) Use LU. It creates a clone and a snapshot for you; this is called an alternate boot environment (ABE). You can apply patches to your ABE during the day and activate it when you have scheduled downtime by simply issuing a luactivate command and rebooting with init 6.
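For option 1, the raw ZFS route looks roughly like this. The snapshot name is my own choice, and patchadd stands in for whatever installer your patch bundle ships with:

# Recursively snapshot the root BE (this covers /var as well) as the
# back-out point.
zfs snapshot -r rpool/ROOT/s10s_u7wos_08@prepatch

# Drop to single-user mode and apply the patches to the live system.
init S
patchadd <patch-id>

# If the patched system misbehaves, you cannot roll back the root you are
# running on; boot failsafe (or another BE) first, then:
zfs rollback rpool/ROOT/s10s_u7wos_08@prepatch
zfs rollback rpool/ROOT/s10s_u7wos_08/var@prepatch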

So the command sequence goes something like this:

lucreate -n newABE

If you do a zfs list you will see the new ZFS datasets and snapshots created for you; snapshots have an "@" in the name. Now install the patches using the -B option, which tells the installer which boot environment to apply the patches to:

./installpatchset -B newABE --s10patchset

Do another zfs list and you will see that newABE has grown by the size of the patches installed. Now activate and reboot:

luactivate newABE
init 6
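At any point you can check the state of your boot environments:

# List boot environments with their complete/active/activated flags.
lustatus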

After rebooting, you will be running on newABE.

To revert, run luactivate s10s_u7wos_08 and init 6 again.

One very important thing to remember: if you put things like the patch zip files on the root file system of the s10s_u7wos_08 boot environment and then delete them, the blocks go into the snapshot and the space is not reclaimed (the zfs man page is very enlightening on this; I cannot recommend it enough). So I highly recommend that you keep non-OS files on other file systems.

For that reason I have an rpool/patches dataset that I put my patches on. Because it is not part of root, deleting files from it does not push them into the snapshot; the snapshot covers only / and /var. I know this sounds a bit confusing, but it is actually easy. You might want to google Live Upgrade, as there are tons of documents out there on how to use it. A sketch of the dataset setup follows.
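Creating such a dataset is a one-liner. The /patches mountpoint here is an assumption; pick whatever suits your layout:

# Create a dataset outside the boot environment for patch bundles.
zfs create -o mountpoint=/patches rpool/patches

# Unpack patch bundles under /patches; deleting them later really frees
# the space, because the BE snapshots cover only / and /var.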
 
Thank you very much, etuser and Linnorm, for your time and for your help with my queries.
 