
Finding process writing to a ghost file?

Status
Not open for further replies.

hronek

MIS
Nov 1, 2001
14
US
I had a file that was growing by gigabytes, and I could not find the process that was writing to it. Someone must have removed the file, but it was still out there as a ghost file. I could not find it with du -sk or ls. How can I find this file and map an OS process to it?
 
Hi,

Can you not use "fuser filename"? It will give you the processes associated with it.
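For example, something like this (a minimal sketch; /u01 is a hypothetical mount point, substitute your own). On Solaris, fuser -c reports every process with a file open anywhere on that filesystem, which works even without a file name:

```shell
# List PIDs with files open anywhere on the filesystem containing /u01
# (/u01 is a hypothetical mount point -- substitute your own).
# fuser prints PIDs on stdout and the path/flags on stderr;
# "|| :" keeps the sketch from aborting if fuser or the path is missing.
pids=$(fuser -c /u01 2>/dev/null || :)

# Show the full command line of each process found
for p in $pids; do
    ps -fp "$p"
done
```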

regards,
dbase77
 
We do not know the file name because we cannot see the file. Can you use fuser without knowing the filename? We assume there are a number of ghost files, since the inode count is very high in the directory where we think the ghost file existed.
 
Hi,

Ghost file? Run ps -ef and check for any unusual processes, since you couldn't see this file. As admin, you should at least know what is running on your system.

regards,
dbase77
 
On this system there are 22 development and test databases with over 1,000 processes running at any time. We did take an educated guess and killed a process, which brought the mount point from 50 GB at 100% used down to less than 20 GB used. I want to resolve this more easily next time: find out what these ghost files are and map each one to a process I can kill.
 
When I say ghost file, I mean a file that you cannot see with ls but that is still taking up inodes and space on a mount point.
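For anyone following along, here is a minimal runnable sketch of how such a ghost file comes about (the temp directory and file name are just illustration): the blocks stay allocated until the last file descriptor on the removed file is closed, which is also why df and du disagree on such filesystems.

```shell
dir=$(mktemp -d)
exec 3> "$dir/ghost"                    # hold the file open on fd 3
dd if=/dev/zero bs=1024 count=1024 >&3 2>/dev/null   # write 1 MB to it
rm "$dir/ghost"                         # unlink it: ls and du see nothing

ls -A "$dir"                            # directory appears empty...
df -k "$dir"                            # ...but df still counts the blocks

exec 3>&-                               # only closing the fd frees the space
rm -rf "$dir"
```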
 
hi,

It is very hard to find this ghost file. But if you know the inode, you can use "find" to search for the filename associated with that inode. Have you tried this?
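A sketch of that approach, shown against a throwaway directory (substitute a real mount point and inode number in practice). One caveat: this only helps while the inode still has a directory entry somewhere; a fully unlinked file has no name left for find to report.

```shell
dir=$(mktemp -d)
touch "$dir/data"
ino=$(ls -i "$dir/data" | awk '{print $1}')   # inode number of the file

# -xdev keeps find on this one filesystem; inode numbers are only
# unique within a single filesystem.
find "$dir" -xdev -inum "$ino"
rm -rf "$dir"
```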

dbase77
 
The command [tt]pfiles[/tt] will show open files belonging to a process, even if they don't appear in an [tt]ls[/tt].

Something like...
[tt]
$ pfiles 11662
11662: ../command -options parameters
  Current rlimit: 256 file descriptors
   0: S_IFCHR mode:0620 dev:32,1 ino:302507 uid:1001 gid:7 rdev:24,2
      O_RDWR
   1: S_IFREG mode:0644 dev:32,7 ino:101768 uid:1001 gid:200 size:2288141
      O_WRONLY|O_LARGEFILE
   2: S_IFREG mode:0644 dev:32,7 ino:101768 uid:1001 gid:200 size:2288141
      O_WRONLY|O_LARGEFILE
   3: S_IFREG mode:0644 dev:32,7 ino:101762 uid:1001 gid:200 size:0
      O_WRONLY|O_LARGEFILE
   5: S_IFDOOR mode:0444 dev:230,0 ino:19882 uid:0 gid:0 size:0
      O_RDONLY|O_LARGEFILE FD_CLOEXEC door to nscd[192]
   8: S_IFREG mode:0644 dev:32,1 ino:50603 uid:0 gid:1 size:28
      O_RDONLY
$
[/tt]
You have to supply the process ID, so maybe write a little script to go through all PIDs and grep/awk for "[tt]size:nnnn[/tt]" where [tt]nnnn[/tt] is over a certain size.
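A sketch of such a script, assuming Solaris [tt]pfiles[/tt] is available; the 1 GB threshold and the [tt]ps[/tt] options are just examples:

```shell
#!/bin/sh
# Scan every PID for open regular files larger than THRESHOLD bytes.
THRESHOLD=1073741824   # 1 GB -- example value, adjust to taste

for pid in $(ps -e -o pid=); do
    pfiles "$pid" 2>/dev/null |
    awk -v pid="$pid" -v min="$THRESHOLD" '
        /S_IFREG/ && match($0, /size:[0-9]+/) {
            sz = substr($0, RSTART + 5, RLENGTH - 5) + 0
            if (sz > min)
                printf "pid %s holds a %d-byte file open\n", pid, sz
        }'
done
```

Bear in mind that pfiles briefly stops the target process while inspecting it, so run this with care on a busy production box.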

This won't show the file, but it should give you the process that's eating up the disk space.

Hope this helps.

 
lsof is also very useful in these situations, and you can tell it to only look at a specific directory.
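For example (a sketch; /u01 and the Oracle path are hypothetical). [tt]+L1[/tt] selects open files whose link count is below one, i.e. exactly the deleted-but-still-open files in question:

```shell
# List deleted-but-still-open files on the filesystem mounted at /u01
# ("|| :" just keeps the sketch from aborting where lsof or the path
#  is absent).
lsof +L1 /u01 2>/dev/null || :

# Or look only at files open under one directory tree:
lsof +D /u01/app/oracle 2>/dev/null || :
```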

Some Oracle reporting processes (usually called ar25run when I come across them) seem to create temporary files and unlink them immediately, presumably so that if the process ends unexpectedly the file is cleaned up "automatically". But it's a pain for us Unix admins, especially when those temporary files become 2GB+.

Annihilannic.
 