
Directory & file performance


robdon

Programmer
May 21, 2001
Hi,

I've been told, and I kind of understand why, that having a lot of files in one dir can affect the performance of opening and accessing those files.

This is because for each open, Unix has to read the 'directory table' to get the inode.

We are using Tru64.

So the more files in the dir, the bigger the directory table is and the longer it should take to find the inode (with a linear scan, a 1000-file dir means checking roughly 500 entries on average per open, against about 10 in a 20-file dir).

However, testing this out by creating a dir with 1000 files in it and then executing the command below shows no time difference compared to running it in a dir with only 1 file in it.

time repeat 1000 cat rob.txt > r
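For reference, the test directory could be set up along these lines (just a sketch; /tmp/bigdir and the file names are made up for illustration):

#!/bin/csh
# Sketch: build a dir holding 1000 small files plus the rob.txt used above.
mkdir /tmp/bigdir
cd /tmp/bigdir
set i = 1
while ($i <= 1000)
    echo "test data" > file$i.txt
    @ i = $i + 1
end
cp file1.txt rob.txt

After this, ls -ld /tmp/bigdir shows that the directory file itself has grown well beyond a single block; that growing file is the 'directory table' referred to above.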

So, does anyone know if lots of files in a dir really do slow things down, or at least use more resources, compared to a dir with fewer files?

I'm wondering if we're not seeing any difference because we have a lot of spare memory and CPU in the machine, so maybe it's all getting cached.

We have an application that keeps 1000+ files in one dir, and we are exploring whether it would help to move these files into subdirectories so that there are only around 20 or so in each, something like the sketch below...
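Something like this could do the split (only a sketch; bucketing on the first character of the name is made up for illustration, and a real split would pick a scheme that leaves roughly 20 files per bucket):

#!/bin/csh
# Sketch: spread a flat dir of plain files over subdirectories keyed on the
# first character of each file name. Run it once, and it assumes names longer
# than one character so a file never clashes with its own bucket dir.
cd /path/to/the/big/dir
foreach f (*)
    set bucket = `echo $f | cut -c1`
    if (! -d $bucket) mkdir $bucket
    mv $f $bucket/
end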

Thanks,

Rob D.

ProIV Resource Centre
 
I'd check out what happens if you access a file at random out of those 1000 files.

Chances are your test is unrepresentative, as the system is caching the info you are repeatedly asking for.
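For example, something like this (an untested sketch, reusing the /tmp/bigdir layout from the setup sketch above) opens a different file on every iteration, so each open needs a fresh directory lookup instead of hitting the same cached entry. Saved as scan.csh (a made-up name), it can be timed with "time csh scan.csh" in the 1000-file dir and again in a near-empty one:

#!/bin/csh
# Sketch: open each of the 1000 files once, rather than the same file 1000 times.
# Sequential for simplicity; picking names at random would work just as well.
cd /tmp/bigdir
set i = 1
while ($i <= 1000)
    cat file$i.txt > /dev/null
    @ i = $i + 1
end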


HTH,

p5wizard
 
You'll need to identify what type of filesystem you're using. Some modern filesystems like reiserfs do take steps to prevent large directories from being a problem.
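On most Unix flavours (Tru64 included) something like the following will show that; the exact output format varies by system:

df -k /path/to/the/dir    # which mounted filesystem the dir lives on
mount                     # lists mounted filesystems and their types (ufs, advfs, ...)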
 