anyone know the equivalent vmo parameter for pgs_thresh (vmtune -H) under AIX 5.2 and beyond?
here's why I'm askin'
-----------------------------------------
Austin CMVC
Doc Date: 2004/10/21 Doc Key: O447046
APAR_aix_51 = IY60109
Customer's free list drops to 0 and the system pages
heavily. They are well above maxperm and the free list
values are set appropriately - very high.
The system is unable to meet the memory demands placed on
it when under heavy workload. The workload on this system
varies depending on hour, day and time of month.
FIX REQUESTED:
--------------
The sequential i/o needs to be clipped to allow the system
to keep up with the demand for memory. The problem with
clipping the i/o rate is that a more lightly loaded system
has the capacity to provide better throughput.
The approach tested by the performance team is to reduce
read-ahead when the system is heavily loaded and increase
the read-ahead when the system is lightly loaded.
The system needs to clip the read-ahead when the free list is
in danger, and this requires a third read-ahead value.
The addition of a third read-ahead value would allow the
system to change the number of pages read ahead when the
system's memory is stressed. When this is not occurring,
the sequential read-ahead can be increased back to the max
setting.
When the system detects that the free list has dropped
below minfree - (maxfree - minfree), the third read-ahead
value will be checked. If it is not 0, the max read-ahead
value will be set to the new value. When the free list
reaches maxfree, the original max read-ahead value will be
restored.
If the free list still drops to 0 and the new third
read-ahead value is not 0, the max read-ahead value will be
further reduced by setting it to the min read-ahead value.
When the free list returns to maxfree, the max read-ahead
value will be restored to its original value.
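If it helps, here's how I read that clipping logic, as a C sketch.
The names are mine (thrpgahead stands in for the proposed third
value, clip_readahead for wherever this check would live); only
minpgahead/maxpgahead/minfree/maxfree are the familiar tunables.
This is a guess at the shape, not the actual kernel source:

static long minfree = 120, maxfree = 128;   /* example tunings only */
static int  minpgahead = 2, maxpgahead = 8; /* classic AIX defaults */
static int  thrpgahead = 4;                 /* hypothetical third value; 0 disables clipping */
static int  maxpgahead_saved = 0;           /* original max, saved while clipped */

void clip_readahead(long numfrb)            /* numfrb: free frames in the pool */
{
    if (numfrb <= 0 && thrpgahead != 0) {
        /* free list hit 0 despite the first clip: drop to the minimum */
        if (maxpgahead_saved == 0)
            maxpgahead_saved = maxpgahead;
        maxpgahead = minpgahead;
    } else if (numfrb < minfree - (maxfree - minfree) && thrpgahead != 0) {
        /* free list in danger: apply the third (clipped) value */
        if (maxpgahead_saved == 0)
            maxpgahead_saved = maxpgahead;
        maxpgahead = thrpgahead;
    } else if (numfrb >= maxfree && maxpgahead_saved != 0) {
        /* free list back at maxfree: restore the original maximum */
        maxpgahead = maxpgahead_saved;
        maxpgahead_saved = 0;
    }
}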
The goal is to scale back pageahead when the system is running low on
free frames, in order to avoid pre-paging memory that will then need
to be forced back out (or will force other memory out) when the LRU
daemon runs.
memp_pgs_thresh is a per-mempool threshold, tuned as a percentage of
total memory. If the number of free pages in a mempool drops below
this threshold, then pageahead will be linearly scaled back, using a
global "pf_cutpgahead". The check of numfrb versus memp_pgs_thresh is
done on every page fault that runs v_spaceok. It is assumed that the
mempools will be kept roughly in balance, so that one mempool will not
be significantly below the threshold while another is above it.
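Reading between the lines, the scale factor might be derived on the
v_spaceok path something like this. The linear formula is my
speculation ("linearly scaled back" is all the doc says), and
update_cutpgahead is a made-up name:

static long memp_pgs_thresh = 1024;  /* per-mempool, from the -H percentage; example value */
int pf_cutpgahead = 100;             /* global scale factor, as a percentage */

void update_cutpgahead(long numfrb)  /* called on each fault that runs v_spaceok */
{
    if (numfrb >= memp_pgs_thresh)
        pf_cutpgahead = 100;         /* at or above threshold: full pageahead */
    else if (numfrb <= 0)
        pf_cutpgahead = 0;           /* free list exhausted: no pageahead */
    else                             /* scale linearly between 0 and the threshold */
        pf_cutpgahead = (int)((numfrb * 100) / memp_pgs_thresh);
}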
The pageahead code in vpageahead will use pf_cutpgahead to
linearly scale its computed value of the number of pages to
pageahead, whether they were obtained for a JFS segment, a client
segment using its own readahead algorithm, or an async client segment.
Client filesystems will not need any code modifications; they can
continue to assume that the pageahead amount they request is
satisfied.
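vpageahead would then just apply that factor to whatever count it
computed, e.g. (again an illustration with a made-up helper name,
not the shipped code):

extern int pf_cutpgahead;            /* the global percentage from above */

int scaled_pageahead(int computed_pages)
{
    /* linearly scale the computed pageahead by pf_cutpgahead */
    return (computed_pages * pf_cutpgahead) / 100;
}

which keeps the scaling inside the VMM, so client filesystems never
have to see it.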
This behavior will be controlled by the vmtune -H option, or the
vmo -o <pgs_thresh>% option.
--------------------------