I don't think you can make it much faster. Alternatively, you could use find /usr -ls and add up the file sizes with awk or something similar, but I'm not sure whether that will achieve what you require. Something along these lines might work:
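This is only a rough sketch; it assumes the size in bytes is field 7 of find's -ls output, which is true for GNU find but may differ on other systems:

    # Sum the size column (field 7 of GNU find's -ls output) for everything under /usr
    # and print the total in megabytes
    find /usr -ls | awk '{ total += $7 } END { printf "%.1f MB\n", total / (1024 * 1024) }'

Note this walks the whole tree just like du does, so it's not obviously cheaper; it mainly gives you the raw per-file sizes to slice with awk however you like.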
Thanks for the reply. Yes, you're right, I don't think it will really achieve what I require, because I will be piping du into a lot of other commands (pipes to sort and awk).
You can improve the performance of your script even more by first storing the output of du in a temp file, and then, instead of running du over and over, just cat-ing the temp file into your sort and awk pipes (see the sketch below).
It's a tradeoff between accuracy and strain on the I/O system: your sort and awk pipes will be working from du output that is a little stale, but catting a file is far cheaper than stat-ing every file and directory in a filesystem and counting blocks, even if the data is 95% cached...
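A minimal sketch of that approach, assuming a hypothetical cache file /tmp/du_usr.txt and du -k (sizes in kilobytes):

    # Run the expensive du scan once and cache its output
    du -k /usr > /tmp/du_usr.txt

    # Reuse the cached output for as many pipelines as you like, e.g.
    # the ten largest directories...
    sort -nr /tmp/du_usr.txt | head -10

    # ...or only directories larger than 1 GiB (1048576 KiB)
    awk '$1 > 1048576 { print }' /tmp/du_usr.txt

Re-run the du line whenever the numbers need to be refreshed; everything downstream stays cheap.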