
I am a newbie in the Solaris world... need help expanding /usr


bolobaboo (MIS), Aug 4, 2008
I have Solaris 10 and the /usr file system is at 80%. I don't know how to increase it; any help!!
# df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 5.9G 399M 5.5G 7% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 15G 1.5M 15G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/dev/md/dsk/d3 4.9G 3.9G 1.0G 80% /usr
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
5.9G 399M 5.5G 7% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
5.9G 399M 5.5G 7% /platform/sun4v/lib/sparcv9/libc_psr.so.1
/dev/md/dsk/d4 4.9G 2.1G 2.8G 43% /var
swap 15G 6.9M 15G 1% /tmp
swap 15G 40K 15G 1% /var/run
/dev/md/dsk/d54 7.9G 4.5G 3.3G 58% /software
/dev/md/dsk/d5 3.9G 1.4G 2.5G 35% /opt
/dev/md/dsk/d53 3.9G 1.7G 2.2G 44% /export/home
# metastat d3
d3: Mirror
Submirror 0: d13
State: Okay
Submirror 1: d23
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 10501632 blocks (5.0 GB)

d13: Submirror of d3
State: Okay
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s3 0 No Okay Yes


d23: Submirror of d3
State: Okay
Size: 10501632 blocks (5.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s3 0 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
c1t0d0 Yes id1,sd@n5000c5000b182573
c1t1d0 Yes id1,sd@n5000c5000b14b1af
#
 
If you were wanting to grow it on the fly using growfs, you cannot do it: /usr is one of the filesystems you cannot grow that way.

Do a man on growfs.
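
For a filesystem growfs can handle, the on-the-fly grow looks roughly like this (hypothetical names: a concat/stripe d100 mounted at /data, with a spare slice c1t2d0s0):

# metattach d100 c1t2d0s0
# growfs -M /data /dev/md/rdsk/d100

metattach appends the new slice to the concat, and growfs expands the mounted UFS into the added space.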

 
Hi djr11,
I see the following disks on my system:
# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@1,0
2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@2,0
3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@3,0

I don't know how much space is used on those disks, or how to grow this file system; is there any other way to add space to it?
 
You can only grow metadisks in Solaris Volume Manager if you are using soft partitions.
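
For example, a UFS on a soft partition (hypothetical: soft partition d80 mounted at /data) can be grown online roughly like this:

# metattach d80 1g
# growfs -M /data /dev/md/rdsk/d80

metattach adds 1 GB to the soft partition, then growfs expands the mounted UFS into it.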
 
No, that is not correct. Read the man page for growfs.

I have been growing partitions within SVM for years. Not soft partitions; I do not use them.

 
I guess the best thing is to get back to your issue.

I had the same problem as you with a system I inherited from another group when we took over their server support. Fortunately for me, it was /var that needed to be grown, so I just used growfs. Unfortunately for you, /usr is one of only a few filesystems you cannot grow with growfs.

My suggestion to you would be to use Live Upgrade to fix this issue, again something we use routinely. First off, do you have any free disks that are the same size as your boot drives?

If so, great, it makes it that much easier; if not, that is not a big deal.
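
You can check whether those other disks from your format output (c1t2d0, c1t3d0) are actually free; prtvtoc and metastat -p are standard commands, and c1t2d0 here is just the example target:

# prtvtoc /dev/rdsk/c1t2d0s2
# metastat -p

prtvtoc prints the disk's slice table, and metastat -p lists which slices SVM is already using.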

If you do not have a free disk, the only way to do it is to remove one of your mirrored disks from SVM (e.g. c1t0d0) and do a live upgrade against that disk. Note that before doing the live upgrade against c1t0d0, you have to re-partition the disk to fit your needs.

The end result after the live upgrade is that your system will have two different boot environments, your current one and an alternate, and you can then boot into either. Of course, after verifying all is well, you will need to remove the old boot environment and mirror that disk up against your new boot environment in order to bring RAID protection back to your host.
This may sound like a lot, but it is literally 5 steps (a rough command sketch follows the list).

1) metadetach every c1t0d0 submirror, then metaclear it; also delete any metadb replicas on that disk. Note you are now running on a single-sided mirror.

2) using "format" re-partition c1t0d0 to your liking. Than re-add to SVN using metastat commands.

3) run lucreate against c1t0d0

4) luactivate the new boot environment

5) once you have confirmed your new boot environment is good, remove the old boot environment and use that disk to mirror up your new boot environment.
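
In command form, the five steps might look roughly like this. Every device name below is an assumption based on your metastat output (d0/d10 for root on c1t0d0s0, d3/d13 for /usr on c1t0d0s3, s7 for the metadb slice); adapt them to your real layout:

# metadetach d0 d10
# metadetach d3 d13
(repeat the detach for every mirror with a submirror on c1t0d0)
# metaclear d10 d13
# metadb -d c1t0d0s7
# format
(re-slice c1t0d0 with a larger slice for /usr)
# metainit d10 1 1 c1t0d0s0
# metainit d13 1 1 c1t0d0s3
# lucreate -n alt_boot_OS -m /:/dev/md/dsk/d10:ufs -m /usr:/dev/md/dsk/d13:ufs
(plus -m entries for /var, /opt, etc. to match your layout)
# luactivate alt_boot_OS
# init 6

Step 5, after verification, is ludelete on the old environment followed by re-mirroring with metainit/metattach.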

All this is only about 30 minutes' worth of work, very easy. The thing about Live Upgrade is that you are never messing with your actual running boot environment (OS); instead you are creating another copy that you can use, so there is no way to mess up.

What makes it easier when you have free drives is that you do not have to touch your current boot environment at all, e.g. removing c1t0d0 from SVM and so on.

If you need more detail and are interested, let me know and I can walk you through it. Even being a newbie, as long as you THINK before doing, you will be fine.


 
By the way, what I wrote is an example using c1t0d0; you may have a reason not to use that disk. Again, it is just an example based on what you posted.
 
Hi djr11,
I understand: you first take the first disk from the list above out of the mirror, re-partition it with a bigger partition for /usr, and then attach it back to the mirror with the second disk. Sounds easy but scary for me... can you give me the step-by-step commands with options, and also verification steps?

Thank you very much.
 
No, you are not quite getting it.

let's say your root drives were designed like this:

d0 - main mirror for /
d10 - submirror c0t0d0
d20 - submirror c0t1d0

You would detach d20 from d0, leaving you with a single-sided mirror made up of d10 attached to d0.
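
In commands, that detach is a single step (using the hypothetical d0/d20 names above):

# metadetach d0 d20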


You would then use that disk (c0t1d0, freed as d20) to do the live upgrade against. Once you detach it from the mirror, you re-partition it as you mentioned, but you DO NOT resync it back up to the mirror d0; instead, once it is re-partitioned, you run the live upgrade against d20.

NOTE: if your boot disks have many SVM partitions, then you will have to deal with those as well. I am showing you an easy example. In the end, all of c0t1d0 must be free to run the live upgrade against.

Don't forget about any metadb's that may be on c0t1d0; you have to remove them also. Make sure they are not your only copies, though!!
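
Listing and removing them might look like this (s7 is an assumed replica slice; check the listing from -i first):

# metadb -i
# metadb -d c0t1d0s7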

The great thing about Live Upgrade is that as long as you do not mess with the disk your current OS is running on (e.g. d0/d10), it is virtually impossible to mess up, provided you think.

Once you re-partition d20 (c0t1d0), you run lucreate against that disk. A valid example would be something like:

lucreate -n alt_boot_OS -m /:/dev/md/dsk/d20:ufs -m -:/dev/md/dsk/<your swap device>:swap

For your swap device only!! you can use the same swap device your current boot environment is using, e.g. /dev/md/dsk/d# in your lucreate command.
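
For instance, if your current swap metadevice were d1 (a hypothetical name; check yours with swap -l), the filled-in command would read:

# lucreate -n alt_boot_OS -m /:/dev/md/dsk/d20:ufs -m -:/dev/md/dsk/d1:swap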

Another thing I would suggest: Sun/Oracle has a patch cluster designed for Live Upgrade; download and install it before you do anything.


After you run the above command against d20, it will complete, and then you can run the command "lustatus".

The lustatus command will show your current boot environment and the new one you created (e.g. alt_boot_OS), tell you which one is active, and so on.

If you want to boot into the new boot environment, you activate it using "luactivate alt_boot_OS" and then reboot with "init 6". It MUST be init 6 or shutdown -r, nothing else.
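
That pair of commands, using the boot environment name from the lucreate example above:

# luactivate alt_boot_OS
# init 6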

After the reboot, you will be in the new boot environment, which looks just like the other; test and go from there. If you want to go back to the old environment, just re-activate it.

The thing to remember: you are never deleting anything UNTIL you are confident everything looks perfect; then you will probably have to remove the old boot environment in order to RAID up your boot disks.
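
Once you are confident, removing the old boot environment is one command (old_boot_OS stands in for whatever name lustatus reports for it):

# ludelete old_boot_OS

The freed disk can then be re-sliced to match the new layout and attached back as second submirrors with metainit/metattach.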

There are some very good white papers on this at Sun/Oracle's web site. I suggest you read them.

In my environment, we use Live Upgrade to limit downtime. With the disk design I put into place, I do not have to break any mirrors. I do about 80 live upgrades a year; to date, I'm probably at #300 or so. It is a really great product that many people do not know much about.


Do not take any of my examples and attempt to run them as-is. I have given you an overview of what to do, but since I do not know your layout, it is all in theory.



 
Simple query - why do you think having /usr at 80% is a problem?

The internet - allowing those who don't know what they're talking about to have their say.
 
The AIX LVM is the only one that does an exceptional job and excels at what it is intended to do.

Veritas is only marginally better than SVM (DiskSuite).
 