I am attempting to develop a structured approach to dealing with a file system that is at 100% usage (or approaching a warning threshold). I know that proper monitoring should prevent this from happening, but bear with me.
I am thinking about two situations:
1. A file system may have shot up very quickly. In this case I would run a find command to list all files modified in the last 2 days on that file system and manually assess which ones might be the cause (a rough sketch of the command I have in mind follows the list).
2. A file system may be getting near a threshold and could go over soon. I am thinking I would first determine all the directories in that file system (right now I don't have an easy way to do this), then run du -hs <directory> on each one to see which ones use the most space, then manually dig into each directory recursively, looking for old or useless files that can be archived or deleted (see the second sketch after this list).
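
For situation 1, something along these lines is what I have in mind (/data is just a placeholder mount point; -xdev keeps find on that one file system; sort -rh assumes GNU sort):

    # files on /data modified in the last 2 days, largest first
    find /data -xdev -type f -mtime -2 -exec du -h {} + | sort -rh | head -20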
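
For situation 2, one way I might get the per-directory totals in a single pass is something like this (again /data is a placeholder; -x stays on the one file system; --max-depth=1 and sort -rh assume GNU du/sort):

    # disk usage of each top-level directory on /data, largest first
    du -xh --max-depth=1 /data 2>/dev/null | sort -rh | head -20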
Are there any other suggestions, steps, or checks that you would do?