
The 100 TB file system


LanderIndra (Programmer)
May 23, 2005
Hello everybody,

I have been looking at the GFS properties and I found what seems to be a very weak point. In this link, it is stated that the maximum size for a GFS filesystem on a 2.6 kernel is 8 TB (I assume that refers to 32-bit structures), which is not enough for the project we are currently working on. We would need approximately 100 TB.

Is there any way to surpass this limit with GFS running on 64-bit Red Hat Enterprise Linux 4?

Which other filesystem could be used with a farm of Sun StorEdge 3511 arrays to overcome the 8 TB limit?

Has anyone here ever tried to manage a filesystem larger than 8 TB?

(Maybe that's too many questions for a single post ;) )

Thanks in advance for your support,

Lander.
 
In that situation I would redesign the application so that the storage area can be broken down into multiple filesystems, using some kind of hashing of the data (by date, by the first letter of the filename, or anything else you choose), because a filesystem that size would just be too unwieldy; imagine if you had to run an fsck! A rough sketch of the idea is below.
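Just to illustrate the hashing approach, here is a minimal Python sketch. The thirteen mount points /data00 .. /data12 and the md5-based split are my own assumptions, not anything specific to your setup: 100 TB divided into 8 TB filesystems means at least 13 of them, and a stable hash of the filename can decide which one a given file lives on.

import hashlib
import os

# Hypothetical layout: thirteen ~8 TB GFS filesystems mounted
# under /data00 .. /data12, together holding the ~100 TB.
MOUNT_POINTS = [f"/data{i:02d}" for i in range(13)]

def shard_path(filename: str) -> str:
    """Pick a mount point for a file using a stable hash of its name."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(MOUNT_POINTS)
    return os.path.join(MOUNT_POINTS[index], filename)

# The same name always maps to the same filesystem, so lookups
# need no central index.
print(shard_path("experiment-2005-05-23.dat"))

Hashing by date works the same way: derive the index from the date instead of the name, which also makes it easy to retire whole filesystems as the data ages out.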

ZFS might be interesting for you, though, because it claims to be virtually limitless (it is a 128-bit filesystem) and self-maintaining (no fsck), although you would need to be running Solaris 10.

Annihilannic.
 
