
remove old files


rneve

Oct 11, 2002
Is there an easy way to delete files that are older than one year? All files in a specific directory that were created more than one year ago must be deleted.

Thanks in advance
 
Hi. Try:

find <path to dir> -mtime +365 -exec ls -la {} \;

to check that you're picking up what you want, then:

find <path to dir> -mtime +365 -exec rm {} \;

to delete them. If there are too many, try:

find <path to dir> -mtime +365 | xargs rm {}

You can also add a -name '*.txt', say, depending on your
file extension, to pick up only those files you want.
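
For example, to preview just the text files before deleting anything (assuming a .txt extension):

find <path to dir> -name '*.txt' -mtime +365 -exec ls -la {} \;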

 
I don't want to be nit-picking, but that last one should be

find <path to dir> -mtime +365 | xargs rm

without the '{}'

HTH,

p5wizard
 
To pick the nit further, the -exec parameter for the find command fires once per found file, so there would never be a need to switch to xargs.

:)

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L
CompTIA Linux+
CompTIA Security+

 
Oo-er, my nits have been well and truly picked! Yeah, I'm too used to the exec format with the {}. I'm almost sure there have been issues with exec and the number of files processed, though.
 
but

find <path to dir> -mtime +365 | xargs rm

and

find <path to dir> -mtime +365 | xargs -n 100 rm

use fewer resources than

find <path to dir> -mtime +365 -exec rm {} \;

The -exec rm (third) example would start a new rm process for every file found, as opposed to one rm process for as many filenames as fit on a command line in the first example, or at most 100 filenames in the second example...
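
If you want to see the batching for yourself, a harmless way is to substitute echo for rm, so nothing gets deleted and each invocation just prints the filenames it was handed:

find <path to dir> -mtime +365 -exec echo {} \;
find <path to dir> -mtime +365 | xargs echo
find <path to dir> -mtime +365 | xargs -n 100 echo

The first prints one filename per process, the second prints one long line per batch, and the third prints at most 100 filenames per line.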


HTH,

p5wizard
 
I didn't say -exec was the most efficient, just that it works on one file at a time. ;-)

While the xargs approach would use fewer resources, I predict that you won't see much difference between the approaches if you benchmark them. The -exec approach adds the overhead of process startup for each file, but either approach is going to spend a vast (from the processor's viewpoint) amount of time waiting for the disk activity of the unlink() call, which both ultimately use for each file. Given that disk is roughly 100,000 times slower than RAM, xargs would have the same relative effect on performance as running to your car in the morning would have on the length of your drive to work.

At least that's my prediction. ::)
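
If anyone wants to measure it, a rough sketch (assuming an expendable scratch directory /tmp/old_files full of test files, a hypothetical path; recreate the files between runs so both timings see the same work):

time find /tmp/old_files -mtime +365 -exec rm {} \;
time sh -c 'find /tmp/old_files -mtime +365 | xargs rm'

The sh -c wrapper is just so time covers the whole pipeline, not only find.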

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L
CompTIA Linux+
CompTIA Security+

 

find <path to dir> -mtime +365 -exec rm {} \;

I prefer this way.
I also like to add "-type f" just to make sure it's a normal file.
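
Putting those two together, the full command would be:

find <path to dir> -type f -mtime +365 -exec rm {} \;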
 
Jim asked for testing. Suffice it to say that my illusions about -exec complaining about the number of files appear to have been wrong, so good catch from Rod.
 