Oo-er, my nits have been well and truly picked! Yeah, too used to the exec format with the {}. I'm almost sure there have been issues with exec and the number of files processed, though.
The -exec rm (3rd) example would start a new rm process for every file found, as opposed to one rm process for as many filenames as fit on a command line in the first example, or at most 100 filenames per rm in the second example...
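For reference, the three forms being compared look something like this (my reconstruction of the earlier examples; /some/dir is just a placeholder, the actual commands appeared earlier in the thread):

    find /some/dir -type f | xargs rm            # 1st: one rm per batch that fits on a command line
    find /some/dir -type f | xargs -n 100 rm     # 2nd: one rm per (at most) 100 filenames
    find /some/dir -type f -exec rm {} \;        # 3rd: one rm per file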
I didn't say -exec was the most efficient, just that it works on one file at a time. ;-)
While the xargs approach would use fewer resources, I predict that you won't see much difference between the approaches if you benchmark them. The -exec approach adds the overhead of process startup for each file, but either approach is going to spend a vast (from the processor's viewpoint) amount of time waiting for the disk activity of the unlink() call, which both ultimately use for each file. Given that disk is roughly 100,000 times slower than RAM, xargs would have the same relative effect on performance as running to your car in the morning would have on the length of your drive to work.
At least that's my prediction. :-)
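If anyone wants to test the prediction, a rough timing run would be something like the following (a sketch only; the directory name, filename pattern, and file count are invented for illustration):

    mkdir /tmp/rmtest && cd /tmp/rmtest
    touch file{1..10000}                               # create 10,000 empty files (bash brace expansion)
    time find . -type f -name 'file*' -exec rm {} \;   # one rm process per file
    touch file{1..10000}                               # recreate the files
    time find . -type f -name 'file*' | xargs rm       # a handful of rm processes

Expect the -exec run to rack up more user/sys time from the extra fork/exec work; whether the real (wall-clock) time differs noticeably is exactly what the prediction above is about.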
Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L
CompTIA Linux+
CompTIA Security+
Jim asked for testing. Suffice it to say that my illusions about -exec complaining about the number of files appear to have been wrong, so good catch from Rod.
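For what it's worth, the "too many files" error people tend to remember is the shell's "Argument list too long", which comes from glob expansion blowing past the kernel's ARG_MAX limit, not from find's -exec. A quick illustration (/huge/dir is hypothetical):

    getconf ARG_MAX                          # the kernel's limit on total argument length
    rm /huge/dir/*                           # can fail: the shell's glob expansion hits ARG_MAX
    find /huge/dir -type f -exec rm {} \;    # immune: rm is handed one filename at a time
    find /huge/dir -type f -exec rm {} +     # batches arguments like xargs, staying under the limit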