I/O performance question with 90 - 100% filesystem full


Sudmill

Programmer
Apr 20, 2001
65
GB
Hi

I'm trying to get a definitive answer on the following, but can't seem to locate a clear answer anywhere.

We have many filesystems on an AIX 4.3.3 server that contain nothing but Oracle datafiles. These files are fixed in size and are manually extended as required (autoextend off).

What I'd like to know is: are there any performance issues (or other issues) with having a filesystem between 90% and 100% full? Any links to articles (especially from IBM) would be much appreciated!

To clarify, here's an example:

In the /u05 filesystem there are several datafiles:
orcl1data1.dbf
orcl1data2.dbf
orcl1data3.dbf

These datafiles take up 100% of the filesystem space. Would there be any performance benefit in always ensuring that a filesystem does not grow above a certain threshold percentage?
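
To put rough numbers on that, here is a minimal sketch (Python, assuming the standard os module; /u05 and the .dbf names are just the example above) comparing the space the datafiles take with the size of the filesystem:

import os

# Rough sketch: compare the space used by the datafiles in /u05
# with the capacity of the filesystem itself.
fs = "/u05"
datafiles = [
    os.path.join(fs, name)
    for name in os.listdir(fs)
    if name.endswith(".dbf")
]

used_by_datafiles = sum(os.path.getsize(f) for f in datafiles)

st = os.statvfs(fs)                  # filesystem statistics
fs_size = st.f_blocks * st.f_frsize  # total size in bytes
fs_free = st.f_bavail * st.f_frsize  # bytes available to non-root users

print("datafiles use %.1f%% of %s (%.1f%% free)" % (
    100.0 * used_by_datafiles / fs_size,
    fs,
    100.0 * fs_free / fs_size,
))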

Thanks in advance,
Cheers

John (Sudmill)
 
Since the database files are growing, disk I/O performance will degrade because of the increased number of transactions on the databases.

One option to improve overall system performance could be moving the database files to another disk. Also, prefer placing the datafiles, redo log files, and archive log files on separate disks (if you have that many disks!).
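
If you want to verify that layout, a small sketch along these lines (Python; the redo and archive paths below are made-up examples, only the /u05 names come from the question) can show whether the different file types really end up on separate devices:

import os

# Sketch: check whether datafiles, redo logs and archived logs
# actually sit on separate devices. Paths are hypothetical examples.
groups = {
    "datafiles": ["/u05/orcl1data1.dbf", "/u05/orcl1data2.dbf"],
    "redo logs": ["/u06/redo01.log", "/u06/redo02.log"],
    "archive logs": ["/u07/arch"],
}

devices = {}
for name, paths in groups.items():
    # st_dev identifies the device (logical volume) each file lives on
    devices[name] = set(os.stat(p).st_dev for p in paths)

for a in devices:
    for b in devices:
        if a < b and devices[a] & devices[b]:
            print("warning: %s and %s share a device" % (a, b))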

Regds,
- Hemant
Networking and Systems Integration Group
Satyam Computer Services Ltd
India
 
There will be performance issues, but these will be related to how full your tablespace datafiles are, not how full the filesystem is.

For example, I could add a new datafile to a tablespace to increase the space and have it take up 100% of a 2GB partition.

The I/O rate on the partition will depend on how full the datafile is. In this case it will be empty, so there will be little I/O until segments are created in the datafile. As the use of this datafile grows, I/O performance will drop.

In your example it would be better admin-wise to have one large datafile, and slightly better performance-wise too (check out checkpoints in the manuals).
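
To see how full the datafiles themselves are, a quick sketch like this (Python, assuming an Oracle client module such as cx_Oracle is available; the connect string is just a placeholder) reports per-datafile usage from the data dictionary:

import cx_Oracle  # assumption: cx_Oracle (or a similar Oracle client) is installed

# Sketch: report how full each datafile is, since that is what drives
# the I/O rather than how full the filesystem is.
conn = cx_Oracle.connect("system/manager@ORCL")  # placeholder credentials
cur = conn.cursor()

# Old-style outer join so it also works on 8i-era databases
cur.execute("""
    SELECT d.file_name, d.bytes, NVL(SUM(f.bytes), 0)
    FROM   dba_data_files d, dba_free_space f
    WHERE  f.file_id (+) = d.file_id
    GROUP  BY d.file_name, d.bytes
""")

for file_name, total, free in cur.fetchall():
    used_pct = 100.0 * (total - free) / total
    print("%-40s %5.1f%% used" % (file_name, used_pct))

cur.close()
conn.close()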

Alex
 
I can't seem to find the official note on it right now, but I believe the 10% overhead you should keep free is maintained by the LVM without you needing to specify it. It is still a good idea to keep at least a few blocks free in the filesystem, even though the LVM does a nice job of insulating you from disk allocation issues.

IBM Certified -- AIX 4.3 Obfuscation
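
If you do want to enforce a free-space threshold yourself, something along these lines (Python; the mount points and the 10% figure are just examples) is enough to flag filesystems that drop below it:

import os

# Sketch: flag any of the Oracle filesystems whose free space has
# dropped below a chosen threshold. Mount points are examples.
THRESHOLD_PCT = 10.0
mounts = ["/u01", "/u02", "/u05"]

for mp in mounts:
    st = os.statvfs(mp)
    free_pct = 100.0 * st.f_bavail / st.f_blocks
    if free_pct < THRESHOLD_PCT:
        print("%s: only %.1f%% free (below %.0f%% threshold)" % (
            mp, free_pct, THRESHOLD_PCT))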
 