
/var file system 100% full

Status
Not open for further replies.

071 (MIS), Aug 9, 2000
Hi all,
Besides large files etc., is there any chance that a process could cause a filesystem to fill to 100%?

 
Disaster!!
Now the root filesystem has also jumped to 100%.
 
/var

look in /var/adm

there's a file there called syslog; you only need the last few pages.

have a look at those (use the 'tail -100 syslog | pg' command) to see if there's anything interesting, and then truncate it like this:

rm syslog
touch syslog

in the /var/adm directory tree there are probably other log files as well -- try the same trick on those.
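One caveat with the rm/touch trick above (my aside, not from the original post): if syslogd still has the old file open, removing it frees the directory entry but not the disk space until the daemon is restarted, because the daemon keeps writing to the deleted inode. Truncating the file in place keeps the same inode. A minimal sketch with a throwaway file (it does not touch /var/adm/syslog itself):

```shell
# Demo with a throwaway file; the real syslog is not touched.
LOG=/tmp/demo_syslog

# Simulate a log that has grown large.
printf 'old log line 1\nold log line 2\n' > "$LOG"

# Truncate in place: same inode, size drops to zero, and a daemon
# that has the file open simply continues writing into it.
: > "$LOG"

ls -l "$LOG"
```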

/

root is a different matter but it's probably something similar -- difficult for me to say without seeing more. I don't suppose /tmp is actually part of / is it? Would you like to post the output of 'df'?
Mike
michael.j.lacey@ntlworld.com
 
Hi Mike,
I got the /var filesystem down but root is still at 100%.

Part of the df output is displayed below.

Filesystem            kbytes    used    avail  capacity  Mounted on
/dev/dsk/c0t0d0s0     539263  485355        0      100%  /
/dev/dsk/c0t0d0s6     962571  394419   510398       44%  /usr
swap                 2171112     120  2170992        1%  /tmp

One other thing: I cannot seem to apply patches on the box (the Solaris 2.6 patch program). The program runs and seems to finish, but when I reboot the patch still hasn't taken effect (checked with 'uname -a').
Is there a danger of the server crashing with the root slice 100% full?
 
Also, when I log on to the box it says that it 'couldn't set locale correctly'.
 
071,

Try this from / as user 'root':
du -k / | sort -nr | pg

This will give you disk usage in kilobytes, sorted with the largest directories first. Go to those directories and check whether you actually need the files in them: look at the dates last modified, verify whether anything else still uses them, and remove what you don't need. In the case of large text files (*.txt, *.log, etc.), if you want to keep them, 'tar' them to tape, or 'tar' them up and gzip the tarball for storage on the machine, then delete the originals. Also, you might try doing a find for 'core' files and removing them. Hope this helps.
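The core-file hunt suggested above can be sketched like this (the scratch directory and fake core file are my own illustration, so the commands are safe to run anywhere; on a real box you would start the find at /):

```shell
# Work in a scratch directory rather than / for the demo.
DEMO=/tmp/core_demo
mkdir -p "$DEMO"
printf 'pretend this is a core dump\n' > "$DEMO/core"

# List core files with size and modification date first,
# so you can decide which (if any) to keep.
find "$DEMO" -name core -type f -exec ls -l {} \;

# Then remove the ones you do not need.
find "$DEMO" -name core -type f -exec rm -f {} \;
```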

BTW Mike, /tmp is not part of root; it is mounted on swap space (tmpfs).
Jon Zimmer
jon.zimmer@pf.net
The software required `Windows 95 or better', so I installed Linux.

 
Thanks guys,
I found and deleted a huge Oracle file, which has brought it down to 10%!

PS: What exactly are core files, and what is a 'core dump'?
 
Hi 071!

As far as I know, core dumps are a copy of a process's memory contents (RAM) at the time of a crash (an application program crash or an operating system crash), which can be used for analysing the problem that caused the crash. Any other ideas?


rajeshr
:cool:
 
Rajeshr,

Your definition of a coredump is correct. The core files left behind can be very large, and unless you know how to analyze them (I don't) they are just taking up space. The only thing I've ever done with a core file is zip it and mail it to IBM or Sun for analysis of problems with my employers' machines. On my personal machines I just delete them. On my company's machines the only core files I'll keep for a while are those from system crashes or major application problems. Sometimes a program will segfault (coredump) and write a core file to your home directory (this happens a lot with X apps and Tcl programs). Those files are useless unless you're a programmer who can analyze them and fix the problem.
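If stray core files keep appearing, one option (my addition, not from the post above) is to stop processes writing them in the first place. In sh/ksh-style shells the per-process limit on core file size is set with the 'ulimit -c' builtin; setting it to 0 disables core dumps for that shell and everything it starts:

```shell
# Disable core dumps for this shell and any process it starts.
ulimit -c 0

# Confirm the limit; prints 0 when core dumps are disabled.
ulimit -c
```

This only affects the current login session unless it is placed in a shell startup file such as /etc/profile.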
Jon Zimmer
jon.zimmer@pf.net
The software required `Windows 95 or better', so I installed Linux.

 
Hi,
Generally, other than core files you may also have machine code (object files) with a *.out extension. So it's better to search for large files using the 'find' command, and then take action on those files.

The command looks like

find / -mount -size +100 -print > large_files
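A note on the command above (my gloss, not Suresh's): with no suffix, '-size +100' counts 512-byte blocks, so it matches files larger than roughly 50 KB. Appending 'c' makes the number count bytes, which is easier to reason about. A self-contained sketch using a scratch directory so it does not scan the whole machine:

```shell
# Scratch directory for the demo.
DEMO=/tmp/find_demo
mkdir -p "$DEMO"

# One ~200 KB file and one tiny file.
dd if=/dev/zero of="$DEMO/big" bs=1024 count=200 2>/dev/null
printf 'tiny\n' > "$DEMO/small"

# Files larger than 100 KB, with the size given in bytes ('c'):
find "$DEMO" -type f -size +102400c -print
```

Only the large file is printed; the tiny one falls under the threshold.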

Suresh.
sureshcmc@mailcity.com
 