
maximum number of files/subdirectories


waseem786 (Technical User, US)
Jul 2, 2008
What is the maximum number of files/subdirectories you can have in a directory (Solaris 10)?
 
I can't find any information about that... the only thing I've found is that the maximum number of subdirectories is 32767. How many files do you need? Have you tried testing by just creating many files?

Annihilannic.
 
Funny, I couldn't either. It is possible that it is limited only by the number of inodes on the filesystem, i.e. in theory you could put them all in a single directory. But from personal experience, the filesystem will crawl once you get above a few tens of thousands of files.

 
I no longer have SunSolve access; would you mind pasting the relevant part, please (for my benefit and others')?

Annihilannic.
 
Hi Annihilannic, this is the answer from SunSolve:

Best regards

The maximum number of subdirectories allowed within a single directory in the
Solaris[TM] Operating Environment is limited by the LINK_MAX parameter.

This parameter is defined as 32767 in the /usr/include/limits.h
header file and it cannot be changed.

Since each directory (even an empty one) already contains two links
(to itself, '.', and to its parent, '..'), the total available goes
down to 32765. In general, you would be hard pressed to hit this
limit unless you are trying to create more than 32765 subdirectories
in any single directory.

Only subdirectories increase the link count of a directory.
Files do not, so there is no defined limit on the number of files in a
given directory.

Since each directory can hold up to 32K subdirectories and you can nest this as deep
as you like, there is no pro-forma limit to the number of files and directories
you can have, other than the number of inodes in your filesystem. The
number of inodes is an unsigned long, which is about 4 billion on a 32-bit
system; this means, for example, that you would need roughly 128,000
directories (not all within any one directory), each holding the maximum of 32K
entries, to reach this limit. By default you get one inode per 2 KB
of filesystem space (which is, incidentally, far more than you'll ever
use on just about any system).

One additional note: UFS directory entries are stored linearly,
so the more entries you have, the longer it will take to search through
them. This means the UFS filesystem is not a good database,
since there is no hash algorithm to find entries quickly. You should
think twice about using your filesystem as a form of database, as you
will get much better performance from any of the algorithms used in
real database products.


The DNLC (Directory Name Lookup Cache) maintains entries for the most
recently accessed files and can speed up access to them; however, even this
does not make the UFS filesystem a viable database. Solaris 8 and above
have an improved DNLC structure, but the above statement still holds.
 