
Working with large directories


DanNJ (MIS) - Mar 19, 2003

I have a Sun Blade 2000 running Solaris 8 that works with a lot of files each day, around 50 thousand. We need to keep at least three days' worth of these files for research purposes in case an error occurs.

There are two problems with this.

1. Our crontab fails to delete files older than three days because of the size of the directories. This causes our system to fill up and crash due to low disk space.

2. Sometimes there are so many files that even doing an ls -la takes an hour. This makes researching problems very difficult.

Does anyone have any tips for working with very large directories? Also, are there any ways to delete files or directories quickly?
 
Not my OS, but is there any way to interrupt the process every 15 minutes to move the directory to a different name and create a fresh one? It would at least give a more manageable structure.

Ed Fair
Any advice I give is my best judgement based on my interpretation of the facts you supply. Help increase my knowledge by providing some feedback, good or bad, on any advice I have given.
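
A minimal sketch of that rotation, assuming the files land in /data/incoming (a hypothetical path, adjust to wherever your files are written); how it interacts with the writing process depends on whether that process opens files by full path or holds the directory open:

[tt]
#!/bin/sh
# rotate.sh - retire the active directory and start a fresh one
DIR=/data/incoming
STAMP=`date '+%Y%m%d.%H%M'`
mv $DIR $DIR.$STAMP && mkdir $DIR
[/tt]

Run it from cron every 15 minutes (Solaris cron has no */15 shorthand):

[tt]
0,15,30,45 * * * * /usr/local/bin/rotate.sh
[/tt]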
 
That would mean we would create approx 100 directories a day and have to maintain 300 at a time. Since grep is not recursive, that would make research very difficult.
 
grep can be made recursive with find and xargs:

cd into the top of the directory tree, then:

find . -type f -print | xargs grep "mystring"
 
What command are you using in your crontab to delete the files?

Is there a pattern to the names of these files?

The find command should be able to help you:

classify files by age,
sort/parse them by filename,
search for patterns for research.
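
For the age-based cleanup specifically: if the crontab entry deletes with a shell wildcard (rm /data/* or similar), the argument-list limit is the likely failure at this directory size. Feeding find's output to rm avoids expanding anything. A sketch, assuming the files live under /data (a hypothetical path):

[tt]
# one rm per file: slow but safe
find /data -type f -mtime +3 -exec rm -f {} \;

# batch the names through xargs instead: much faster
# (assumes no whitespace in the filenames)
find /data -type f -mtime +3 -print | xargs rm -f
[/tt]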

Here is an example of a recursive grep in two parts (it also deals with compressed and zipped files):
[tt]
********************
cat rgrep

#!/bin/sh
# rgrep - run rgrep1 for every file under the current directory
# usage: rgrep "searchtext"
find . -type f -exec rgrep1 "$1" {} \;

********************
cat rgrep1

#!/bin/sh
# rgrep1 - search one file for a pattern, looking inside
# compressed and zipped files as well
text=$1
filename=$2

if [ -f "$filename" -a -s "$filename" ] ; then
    # compress(1) archives: search the uncompressed stream
    filetype=`echo "$filename" | fgrep '.Z'`
    if [ -n "$filetype" ] ; then
        count=`zcat "$filename" | grep -i -s "$text" | wc -l`
        if [ "$count" -gt 0 ] ; then
            echo "Compressed: $filename : $count occurrences."
        fi
    else
        # self-extracting zip archives (checked before plain .exe)
        filetype=`echo "$filename" | fgrep 'zp.exe'`
        if [ -n "$filetype" ] ; then
            count=`unzip -p "$filename" 2>/dev/null | grep -i -s "$text" | wc -l`
            if [ "$count" -gt 0 ] ; then
                echo "Zipped: $filename : $count occurrences."
            fi
        else
            # plain zip archives
            filetype=`echo "$filename" | fgrep '.zip'`
            if [ -n "$filetype" ] ; then
                count=`unzip -p "$filename" 2>/dev/null | grep -i -s "$text" | wc -l`
                if [ "$count" -gt 0 ] ; then
                    echo "Zipped: $filename : $count occurrences."
                fi
            else
                # DOS executables: search the printable strings only
                filetype=`echo "$filename" | fgrep '.exe'`
                if [ -n "$filetype" ] ; then
                    count=`strings "$filename" 2>/dev/null | grep -i -s "$text" | wc -l`
                    if [ "$count" -gt 0 ] ; then
                        echo "DOS .exe: $filename : $count occurrences."
                    fi
                else
                    # skip .db and .bi files, grep everything else
                    # (grep, not fgrep: the $-anchored patterns are regexes)
                    filetype=`echo "$filename" | grep '\.db$'`
                    if [ -z "$filetype" ] ; then
                        filetype=`echo "$filename" | grep '\.bi$'`
                        if [ -z "$filetype" ] ; then
                            # /dev/null makes grep print the filename
                            grep -i -n -s "$text" "$filename" /dev/null
                        fi
                    fi
                fi
            fi
        fi
    fi
fi
[/tt]
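
To try it, both scripts need to be executable and on your PATH; then cd to the top of the tree and run, for example:

[tt]
chmod +x rgrep rgrep1
cd /some/data/tree
rgrep "mystring"
[/tt]

Note that -exec ... \; forks one rgrep1 per file, so across 150,000 files this will be slow; the find | xargs grep one-liner above is quicker when plain text files are all you need to search.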
 