
using I/O pacing


holdahl

IS-IT--Management
Apr 4, 2006
Does anyone have any suggestions on how to use I/O pacing to tune disk performance?

I'm wondering if there are any rules for what to set minpout and maxpout to for filesystems.

Any suggestions?

# mount -o minpout=0,maxpout=0 /u


-holdahl-
 
I/O pacing! Why do you want to use this? I don't think it is recommended unless you have to!

Regards,
Khalid
 
Because IBM support told me I might try this...

Has anyone used this?
I want to use it for a backup filesystem which generates very high I/O and is causing system degradation.
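For what it's worth, this is roughly how the load can be watched while the backup runs, using standard AIX monitoring commands (the 5-second interval is an arbitrary choice):

Code:
# Per-disk throughput and % time active, refreshed every 5 seconds
iostat -d 5
# System-wide paging activity and I/O wait
vmstat 5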


-holdahl-
 
I hope you find this useful!

Code:
Controlling system-wide parameters
Two parameters control system-wide I/O pacing:
- maxpout: high-water mark that specifies the maximum number of pending I/Os to a file
- minpout: low-water mark that specifies the point at which programs that have reached maxpout can resume writing to the file
The high- and low-water marks can be set by:
- smit -> System Environments -> Change / Show Characteristics of Operating System (smitty chgsys), then entering the number of pages for the high- and low-water marks
- chdev -l sys0 -a maxpout=NewValue
  chdev -l sys0 -a minpout=NewValue

Controlling per-filesystem options
In AIX Version 5.3, I/O pacing can be tuned on a per-filesystem basis. This tuning is done with the mount command, for example:
  mount -o minpout=40,maxpout=60 /fs
Another way is to use SMIT or to edit /etc/filesystems.

Default and recommended values
The default value for the high- and low-water marks is 0, which disables I/O pacing.
Changes to maxpout and minpout take effect immediately and remain in place until they are explicitly changed.
It is a good idea to make maxpout (and also the difference between maxpout and minpout) large enough to be greater than 4*numclust, so that sequential write-behind is not suspended by I/O pacing.
The recommended value for maxpout is (a multiple of 4) + 1 so that it works well with the VMM write-behind feature, because of the following interaction:
1. The write-behind feature sends the previous four pages to disk when a logical write occurs to the first byte of the fifth page (JFS with the default numclust=1).
2. If the pacing high-water mark (maxpout) were a multiple of 4 (say, 8), a process would hit the high-water mark when it requested a write that extended into the ninth page. It would then be put to sleep before the write-behind algorithm had a chance to detect that the fourth dirty page was complete and the four pages were ready to be written.
3. The process would then sleep with four full pages of output until its outstanding writes fell below the pacing low-water mark (minpout).
4. If, on the other hand, the high-water mark had been set to 9, write-behind would get to schedule the four pages for output before the process was suspended.
While enabling VMM I/O pacing may improve response time for certain workloads, the workloads generating large amounts of I/O will be slowed down, because the processes are put to sleep periodically instead of continuously streaming their I/Os.
Disk I/O pacing can improve interactive response time in situations where foreground or background programs writing large volumes of data interfere with foreground requests. If not used properly, however, it can reduce throughput excessively.
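To make that concrete, here is a minimal sketch of the commands involved. The particular numbers (maxpout=33, minpout=24) are only an illustration of the (multiple of 4) + 1 rule above, not a recommendation for any specific workload, and /backupfs is just a placeholder mount point:

Code:
# System-wide: maxpout = (8 * 4) + 1 = 33, per the rule above;
# minpout is a multiple of 4 and leaves a gap greater than 4*numclust
chdev -l sys0 -a maxpout=33 -a minpout=24
# Verify the current settings
lsattr -El sys0 -a maxpout -a minpout
# AIX 5.3 and later: override per filesystem at mount time
mount -o minpout=24,maxpout=33 /backupfs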
Regards,
Khalid
 
Thanks for the help, but I solved my problem by tuning some of the VMM parameters instead.

-holdahl-
 
As I expected, it was not I/O pacing!

Good luck :)

Regards,
Khalid
 
No, I seem to have solved the problem by changing the default values of maxclient%, maxperm%, and minperm%.
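For anyone who finds this thread later, the kind of VMM change being described looks roughly like this. The values shown are common AIX 5.3-era starting points and are assumptions, not necessarily the ones used here:

Code:
# Show the current VMM file-cache tunables
vmo -a | grep -E 'minperm%|maxperm%|maxclient%'
# Lower the file-cache limits so heavy file I/O competes less with
# computational pages (-p makes the change persistent across reboots)
# NOTE: illustrative values only; tune for your own workload
vmo -p -o minperm%=3 -o maxperm%=90 -o maxclient%=90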


-holdahl-
 