
DD and Text functions bringing me down!


WiccaChic (Technical User), Jan 21, 2004
Hi all. I run a couple of AIX servers that are primarily database servers, and they perform pretty well. But I notice from time to time, when our ops use the dd command to load tapes OR when they use csplit to divide up a large file, the system takes a nose dive. It seems to over-commit memory and then start paging like mad. Any suggestions on what I should be looking at? It's a 4-way with 8 GB of real memory.

 
Sounds like maxperm is set too high, which it often is by default. See my last response in your page thread.

Fun Fact: The default tuning of every AIX box, no matter how large, is optimized for use as a single-user machine.

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L

 
Good guess, it is set to 80. What should I do if I wanted to ramp it down? I don't even know what the command is... and what are safe increments to tweak as I try to find the right setting? I am running AIX 5.2.

Thanks Rod!
 
There can be (and have been) entire books written on this topic, but here's a totally warranty-free method you might try:

FIRST AND MOST IMPORTANT: print a copy of the output of vmtune without arguments. If you need to switch back to default settings, it'll be easier if you have them in front of you. Otherwise, you'll have to track them down in documentation or reboot.
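For example, assuming vmtune is in its usual location (it ships in the bos.adt.samples fileset), something like this captures the current settings to a file as well as the screen (the filename is just an example):

/usr/samples/kernel/vmtune | tee /tmp/vmtune.before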

At some time when everything's running fine, fire up topas.

In the right-hand column, add together the "% Comp" and "% Client" numbers. This is the size of the active (not paged out) portion of your working set, as a percentage of real memory. Round this number up to the next multiple of five, then add five (e.g. if they add up to 25.8, you round to 30, then bump up to 35). Subtract this number from 100 and set maxperm to the result using vmtune (in the example, 65).

vmtune -P 65

This tells the VMM that it can use up to 65% of real memory for file cache on equal footing with computational pages regarding paging, but once that level is reached, it should steal only file pages.

The change will be immediate, and will only last until the next reboot or modification of maxperm. If this gives you the improvement you want, then you can make the change permanent by adding the appropriate vmtune command line to the init table, like:

mkitab "vmtune:2:once:/usr/samples/kernel/vmtune -P 65 > /dev/console"
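You can confirm the entry took with lsitab:

lsitab vmtune

which should echo back the line you just added.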

Be sure to observe the system carefully through all of its typical workloads before making the change permanent. When you first make the change, you may see more paging in (pi in vmstat or PgspIn in topas). This is a good thing, but it's not necessarily a bad thing if you don't see it. Ultimately it should settle down. If you have adequate real memory and find the sweet spot, you shouldn't see any computational paging activity (pi/po in vmstat, PgspIn/PgspOut in topas). The PageIn and PageOut numbers in topas should be ignored; they're just regular file activity.
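One low-effort way to do that observation: leave a slow vmstat running through each typical workload, e.g.

vmstat 60 60

(60-second samples for an hour) and keep an eye on the pi and po columns.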

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L

 
Here is the suggested algorithm for changing vmtune settings:
minfree = 120 * #_of_CPUs
maxpgahead = 8 * #_of_CPUs
maxfree = minfree + maxpgahead

Change minperm to 5 and maxperm to 20; there is no algorithm for those. They are just values that usually work best and are a good starting point.

The command to change these settings on a system with 12 CPUs:
/usr/samples/kernel/vmtune -F 1536 -f 1440 -R 96 -P 20 -p 5
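If you'd rather not hard-code the CPU count, an untested ksh sketch along these lines should produce the same numbers (assumes vmtune is installed from bos.adt.samples):

NCPU=$(lsdev -Cc processor | wc -l)   # one line of output per processor
MINFREE=$((120 * NCPU))               # minfree = 120 * #_of_CPUs
MAXPGAHEAD=$((8 * NCPU))              # maxpgahead = 8 * #_of_CPUs
MAXFREE=$((MINFREE + MAXPGAHEAD))     # maxfree = minfree + maxpgahead
/usr/samples/kernel/vmtune -F $MAXFREE -f $MINFREE -R $MAXPGAHEAD -P 20 -p 5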

***********************************************
Paging Spaces
Strategies and tips to help you handle the downside of excessive paging space use
In the first article, we answered the question: "How much paging space do I need to configure on a modern AIX server?" Here, we'll cover the downside of excessive paging space use.
Three negative situations can occur when paging space is utilized excessively:
1. The worst symptom of excessive paging space usage is thrashing, which occurs during a memory over-commitment phase. Paging to and from paging space can dominate the system’s workload.
2. The next least desired situation occurs when all of the available paging space slots become full. The Virtual Memory Manager (VMM) maintains two parameters that monitor free paging space slots: npswarn and npskill. When the number of free paging space slots falls below npswarn, a warning (the SIGDANGER signal) is issued to the running processes. However, most processes don't trap this signal and ignore it. When the number of free paging space slots reaches the npskill value, processes are terminated until enough paging space is freed. Because the VMM checks that sufficient paging slots are available before allowing a new process, an inability to log in may result. This is undesirable, especially when it affects a live database server: the system becomes non-responsive and the database administrator can't log in. Oftentimes, the only recourse is to power off the system.
3. Excessive paging space use can degrade performance. Fetching a page from memory can be tens of thousands of times faster than fetching it from paging space. Furthermore, fetching from paging space is arguably slower than fetching data from DASD for at least two reasons: first, most modern file system logical volumes are striped across multiple drives, and second, normal file I/O can take advantage of the VMM read-ahead mechanism, which paging space logical volumes can't use. Also, slots in paging space are tracked by external page tables, which act like a book's table of contents and are themselves pageable. Therefore, you can suffer a double page fault when fetching data from paging space: first the index of the table is fetched from paging space into memory, then the data located by the index search must be fetched from paging space. This results in more than a twofold increase in access time. These events are called backtracks and are recorded by vmstat; the command vmstat -s displays this information since the last boot.
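On a live system, you can pull that counter out with something like the following (the exact label in the output may vary by AIX level):

vmstat -s | grep -i backtrack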
Typically, the primary paging space resides on rootvg, and probably on hdisk0, both of which can hinder system performance: I/O from the OS, the JFS log and paging space all contend for a single disk. Then, as I/Os mount due to excessive paging space usage, hdisk0 can become the system's bottleneck device. In queuing theory, a bottleneck device is defined as the component with the largest service demand, where the service demand is the product of the number of visits and the component service time. The service time is the time required for the disk to perform its duties. In mathematical terms:

Xsat = 1 / Dmax

Xsat is the maximum throughput, in transactions per unit time, and Dmax is the largest service demand among the components, in units of time.

In other words, the hdisk0 performance can limit the entire system’s performance.
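To put made-up numbers on that: if every transaction visits hdisk0 four times and each visit costs 5 ms of service time, the service demand is 4 * 5 ms = 20 ms, so Xsat = 1 / 0.020 s = 50 transactions per second, no matter how much CPU the box has.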

Determining a Paging Problem vs. a Memory Problem
The following two examples illustrate techniques that can determine whether incipient paging is caused by inappropriate minperm and maxperm settings or by memory over-commitment. In both examples, the scenario is the same: an organization's server consumes increasing amounts of paging space until all paging space is consumed. The system becomes non-responsive and requires a reboot.

Example 1
While the system is still responsive, a performance analyst logs into the server and captures a few 60-second intervals of vmstat (see Figure 1), a few snapshots from topas (see Figure 2) and a snapshot of the vmtune settings (/usr/samples/kernel/vmtune) (see Figure 3).

Our analysis begins with the active virtual memory (AVM) column in the vmstat report. The AVM is an indication of the number of (4 KB) working pages in use on the server. This number includes pages in memory and pages in paging space. Total working storage can be exaggerated where large amounts of paging space have been consumed: once a working page has been placed in paging space, that slot remains allocated until the process terminates or the system is rebooted, even if the page is later paged back into memory. The first step is to compute the total size of working storage:

Working Storage = AVM * 4 KB = 1,002,517 * 4 KB = 4,010,068 KB or about 3.8 GB

Next, compare this number to the size of real memory, shown in the topas snapshot (Figure 2). Is AVM less than real memory? In other words, is 3.8 GB less than 8 GB? Yes. Does it make sense that 60 percent of 3 GB of paging space has been consumed when the total working storage is only 3.8 GB and there is 8 GB of real memory? At first it may not, because real memory is more than twice what working storage requires. Therefore, more analysis is required.
Notice that in Figure 2, the "% Comp" memory is about 35 percent and the "% Noncomp" memory is about 60 percent. With numperm between minperm (20 percent) and maxperm (80 percent), normally only noncomputational pages would be stolen; but if the repage rate for noncomputational pages is greater than that for computational pages, then computational pages are stolen as well, and hence paged out to paging space.

Figure 3 shows output from vmtune. Note that minperm is 20 percent and maxperm is 80 percent.

Therefore, it's probable that computational pages were sacrificed to paging space over noncomputational pages due to repage rates and these settings. Defining new values for minperm and maxperm can solve this problem. I tend to lower both values and decrease the spread between minperm and maxperm. This can reduce paging space consumption.
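Purely as a hypothetical shape of such a change (the numbers here are illustrative, not a recommendation), lowering this server from 20/80 to 10/40 would look like:

/usr/samples/kernel/vmtune -p 10 -P 40

but, as discussed elsewhere in this thread, the right numbers depend on your workload.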
This example is a real server undergoing what I term “fake” paging. JFS-mounted databases can exhibit this phenomenon because of the double buffering of data that occurs. First, files are read into memory (as persistent storage) at the JFS cache level. Next, the same data is read into the database as working storage. This gives the illusion of each page requiring twice the real memory.

Remember that minperm and maxperm are so-called "soft" values; generally, the system doesn't rigidly adhere to the chosen numbers. The value of maxperm can be made a rigid limit by toggling the strict_maxperm switch. (strict_maxperm was an attempt to address the double buffering of data.) However, this option is only recommended in special cases. Additionally, altering vmtune parameters isn't a total solution. It's possible that reducing paging space consumption (by buffering more working storage in memory) could result in more physical I/O at the file level. This would indicate that some of the data file pages that were paged back onto DASD are needed.

Example 2
We’ll use the same ideas presented in the first example and expand on a few new ones. Examining the AVM column in vmstat (see Figure 4), let’s calculate the working storage consumed on this server.

Working Storage = AVM * 4 KB = 2,573,002 * 4 KB = 10,292,008 KB or about 9.8 GB
Next, examine Figure 5 and determine the size of real memory, which is 6,655 MB (6.5 GB). Is the amount of AVM less than the amount of real memory (is 9.8 less than 6.5)? No. Therefore, the initial indication is that memory may be over-committed.

In Figure 5, notice the computational and noncomputational memory ratios. Computational memory comprises 90 percent of the pages in memory. Little can be done on this server to alleviate paging by way of tuning. Also note that paging space is 5,120 MB (5 GB), with about 95 percent consumed (4.7 GB).

Finally, examine Figure 6. The minperm is set to 2 percent, and maxperm to 10 percent. Because this server has already had these values adjusted substantially, this is another indication that virtual memory tuning won’t help.

Examine the data from the vmstat output (Figure 4). The values under the "po" column represent pages written out to paging space per second (averaged over each 60-second snapshot). These numbers appear to be sustained and are greater than five pages per second. There is also a metric called the thrashing severity ratio (TSR): the quotient of the pages written out to paging space ("po") divided by the number of freed pages ("fr"). The idea of the TSR is that when the value is greater than 1/6 (17 percent), thrashing may occur. Let's compute an average TSR (sum "po" and "fr" over the five samples and divide both by 5) for the five-minute period.

TSR = po/fr * 100 percent = 67/2062 * 100 percent = 3.2 percent

This value indicates that although the rate of pages written to paging space was quite high, pending thrashing wasn't evident, because the page stealer was able to free pages at a much greater rate than pages were being written out.

The last item to examine in Figure 4 is the ratio of scanned pages to freed pages (sr/fr). This value is a relative indication of memory over-commitment: it compares the number of pages the page stealer had to scan to the number it actually freed. There are no hard numbers for this value, but performance texts suggest that a ratio of 10:1 is an indication of over-commitment. Let's compute this average value from the "sr" and "fr" columns. The result is 44,451/2,062, or a ratio of about 21:1. Again, this is another indication of memory over-commitment.

Finally, evidence indicates severe memory over commitment exists on this server, and memory tuning this server will yield little benefit. An increase in real memory is the recommended solution.
(Note: No vmtune settings should be changed from their default values without direction from an expert. Also, in AIX versions later than 4.3.3 ML09, the parameter maxclient should be set with the -t flag to equal the value of maxperm.)

Locating Paging Space Issues
Look in the AIX error report to determine if suspected paging space issues are present. The error report in AIX lists messages from the OS; paging space issues are flagged with "VMM" in the resource name. The following two lines (see Figure 7) were returned on a suspect server using this command: # errpt | head -1; errpt | grep VMM

The command # errpt -aj C5C09FFA was used to obtain the details of the VMM entry. The server name was changed to protect confidentiality. The details show "INSUFFICIENT PAGING SPACE DEFINED FOR THE SYSTEM" (see Figure 8). This server became unresponsive and was forced into a reboot.

Configuring Paging Space
If virtual memory tuning can’t help and real memory can’t be purchased, six points are worth noting about paging space creation.
· Place paging space(s) on dedicated disk(s) to eliminate I/O contention
· Use multiple paging spaces spread over multiple disks
· Make the primary paging space slightly larger than the other paging spaces, which should all be the same size
· Disable mirror write consistency (MWC) for paging spaces that are mirrored because this information is transient (it doesn’t exist after process termination or a reboot of the system)
· Use the center intra-disk allocation policy
· Keep JFS logs and paging spaces off the same disk
Paging space consumption can be reduced by adjusting the amount of memory dedicated to noncomputational memory with vmtune and reducing the memory footprint of the applications that are inducing the memory over-commitment.
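As a hypothetical illustration (the volume group, disk name and size here are made up), creating an additional paging space on a dedicated disk, active now and at every restart, might look like:

mkps -a -n -s 32 pagevg hdisk5

where -s 32 is the size in logical partitions, -a activates it on restart and -n activates it immediately.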

Rule of Thumb
Considering the considerable effect that the minperm and maxperm settings have on paging space use, I don't know of an accurate one-size-fits-all paging-space-to-real-memory recommendation. In servers with properly tuned and fitted memory subsystems, paging space use should be minimal under the deferred paging space policy. I've observed numerous working-storage-intensive servers with 3 GB of memory that function properly with 512 MB of paging space.

Example 1, View of vmstat output (figure 1)
kthr memory page faults cpu
----- ----- --------------- ----- -------------------------------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 2 1002522 136026 0 0 0 0 0 0 261 1848 243 2 3 93 2
0 2 1002517 136016 0 0 0 0 0 0 266 2305 261 3 4 92 2
0 2 1002517 136015 0 0 0 0 0 0 241 629 155 1 2 96 1
1 2 1002517 136065 0 0 0 0 0 0 230 217 131 1 2 97 0
12 2 1002517 136064 0 0 0 0 0 0 242 760 257 5 2 92 1
Example 1, View of topas output (figure 2)
Topas Monitor for host: changed EVENTS/QUEUES FILE/TTY
Wed Jan 22 13:58:23 2003 Interval: 2 Cswitch 227 Readch 466480
Syscal 1663 Writech 7186
Kernel 2.5 |# | Reads 136 Rawin 0
User 1.0 | | Writes 12 Ttyout 226
Wait 0.5 | | Forks 1 Igets 0
Idle 96.0 |###########################| Execs 1 Namei 256
Runqueue 1.0 Dirblk 8
Interf KBPS I-Pack O-Pac KB-In KB-Out Waitqueue 2.0
en1 13.1 29.0 25.5 7.3 5.8
en2 0.2 1.5 1.0 0.1 0.1 PAGING MEMORY
Faults 351 Real,MB 8191
Disk Busy% KBPS TPS KB-Read KB-Writ Steals 0 %Comp 34.9
hdisk2 2.0 6.0 1.5 0.0 6.0 PgspIn 0 %Noncomp 59.5
hdisk1 0.0 0.0 0.0 0.0 0.0 PgspOut 0 %Client 0.5
hdisk0 0.0 0.0 0.0 0.0 0.0 PageIn 0
hdisk3 0.0 0.0 0.0 0.0 0.0 PageOut 1 PAGING SPACE
hdisk4 0.0 0.0 0.0 0.0 0.0 Sios 1 Size,MB 3328
%Used 60.4
db2sysc (91034) 1.0% PgSp: 2.2mb instpcs5 % Free 39.5
topas (69926) 0.3% PgSp: 0.4mb root
asnccp (111440 0.3% PgSp: 6.9mb dpcpcs
db2sysc (8918) 0.3% PgSp: 1.4mb instpcs5 Press "h" for help screen.
db2sysc (104162 0.3% PgSp:10.7mb instpcs5 Press "q" to quit program.
Example 1, View of vmtune output (figure 3)
vmtune: current values:
-p -P -r -R -f -F -N -W
minperm maxperm minpgahead maxpgahead minfree maxfree pd_npages maxrandwrt
419223 1676892 2 8 120 128 524288 0
-M -w -k -c -b -B -u -l -d
maxpin npswarn npskill numclust numfsbufs hd_pbuf_cnt lvm_bufcnt lrubucket defps
1677713 26624 6656 1 93 128 9 131072 1
-s -n -S -L -g -h
sync_release_ilock nokilluid v_pinshm lgpg_regions lgpg_size strict_maxperm
0 0 0 0 0 0
-t
maxclient
1676892
number of valid memory pages = 2097141 maxperm=80.0% of real memory
maximum pinable=80.0% of real memory minperm=20.0% of real memory
number of file memory pages = 1238607 numperm=59.1% of real memory
number of compressed memory pages = 0 compressed=0.0% of real memory
number of client memory pages = 0 numclient=0.0% of real memory
# of remote pgs sched-pageout = 0 maxclient=80.0% of real memory


Example 2, Vmstat snapshot listing (figure 4)
kthr memory page faults cpu
---- --------- ---------------- -------------------------------------------------- --------
r b avm fre re pi po fr sr cy in sy cs us sy id wa
1 2 2572964 30 0 0 164 2022 48605 0 468 324 555 48 10 22 20
1 2 2573002 159 0 0 80 2236 48983 0 4196 6088 7391 48 11 33 8
2 2 2573039 128 0 0 25 1852 38549 0 4183 5078 7233 47 15 30 7
1 2 2573081 126 0 0 26 2405 50866 0 4210 4719 7646 45 10 38 7
1 2 2573096 128 0 0 39 1794 35251 0 4210 5180 7257 43 10 38 10
Example 2, Topas snapshot (figure 5)
Topas Monitor for host: EVENTS/QUEUES FILE/TTY
Thu Jan 16 11:24:17 2003 Interval: 2 Cswitch 8113 Readch 346098
Syscall 10667 Writech 6391
Kernel 10.5 |### | Reads 208 Rawin 0
User 48.5 |############# | Writes 73 Ttyout 103
Wait 8.0 |# | Forks 0 Igets 0
Idle 33.0 |########### | Execs 0 Namei 4
Runqueue 1.0 Dirblk 0
Interf KBPS I-Pack O-Pack KB-In KB-Out Waitqueue 2.0
en1 39.4 27.9 34.4 2.0 37.4
lo0 0.0 0.0 0.0 0.0 0.0 PAGING MEMORY
Faults 2 Real,MB 6655
Disk Busy% KBPS TPS KB-Read KB-Writ Steals 0 % Comp 90.4
hdisk18 0.4 5.9 0.4 5.9 0.0 PgspIn 0 % Noncomp 9.1
hdisk25 0.4 3.9 0.9 3.9 0.0 PgspOut 80 % Client 0.5
hdisk4 0.4 1.9 0.4 1.9 0.0 PageIn 2
hdisk41 0.0 0.0 0.0 0.0 0.0 PageOut 0 PAGING SPACE
hdisk43 0.0 0.0 0.0 0.0 0.0 Sios 2 Size,MB 5120
% Used 94.6
http (30230) 4.8% PgSp:28.3mb notesd % Free 5.3
server (23210) 3.3% PgSp: 8.0mb notesc
server (38058) 1.8% PgSp:19.1mb notesd
server (30942) 1.0% PgSp:10.1mb notesa Press "h" for help screen.
topas (36326) 0.8% PgSp: 0.5mb root Press "q" to quit program.
Example 2, View of vmtune output (figure 6)
vmtune: current values:
-p -P -r -R -f -F -N -W
minperm maxperm minpgahead maxpgahead minfree maxfree pd_npages maxrandwrt
34078 70392 2 8 120 128 524288 0
-M -w -k -c -b -B -u -l -d
maxpin npswarn npskill numclust numfsbufs hd_pbuf_cnt lvm_bufcnt lrubucket defps
1363140 40960 10240 1 93 320 9 131072 1
-s -n -S -L -g -h
sync_release_ilock nokilluid v_pinshm lgpg_regions lgpg_size strict_maxperm
0 0 0 0 0 0
-t
maxclient
170392
number of valid memory pages = 1703925 maxperm=10.0% of real memory
maximum pinable=80.0% of real memory minperm=2.0% of real memory
number of file memory pages = 16017 numperm=9.4% of real memory
number of compressed memory pages = 0 compressed=0.0% of real memory
number of client memory pages = 1252 numclient=0.1% of real memory
# of remote pgs sched-pageout = 0 maxclient=10.0% of real memory

Figure 7:
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
C5C09FFA 0121223203 P S SYSVMM SOFTWARE PROGRAM ABNORMALLY TERMINATED
Figure 8:
Details from the Error Report
LABEL: PGSP_KILL
IDENTIFIER: C5C09FFA
Date/Time: Tue Jan 21 22:31:35
Sequence Number: 4205
Machine Id: 0009A88A4C00
Node Id: changed to protect the innocent
Class: S
Type: PERM
Resource Name: SYSVMM
Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED
Probable Causes
SYSTEM RUNNING OUT OF PAGING SPACE
Failure Causes
INSUFFICIENT PAGING SPACE DEFINED FOR THE SYSTEM
PROGRAM USING EXCESSIVE AMOUNT OF PAGING SPACE
Recommended Actions
DEFINE ADDITIONAL PAGING SPACE
REDUCE PAGING SPACE REQUIREMENTS OF PROGRAM(S)
Detail Data
PROGRAM
sleep
USER'S PROCESS ID:
47840
PROGRAM'S PAGING SPACE USE IN 1KB BLOCKS
0
***************************************************
Data is essentially held in pages of 4096 bytes (4 KB). A page in RAM is accessible by the CPU; if the page is on disk, the CPU can't access it directly.

A page fault occurs when a wanted page address does not translate to a real memory address. At this point the Virtual Memory Manager (VMM) knows it needs to get data from disk and place it in RAM, so it checks to see that there is space in RAM in which to put this data.

If there's enough room, VMM checks to see if the wanted page has been used previously by this process:

- if not, an "initial page fault", VMM allocates _two_ pages for the data; one in RAM and the other on a backing page on disk where it can go if it has to be temporarily removed from RAM. This is known as "late page space allocation".

- if it has, a "repage fault" I/O is scheduled to bring the data back from disk and into RAM - the act of resolving this repage fault is called a "page-in" (the process that is waiting for this to happen is in a "page wait state").

So what happens if there's not enough room in RAM to put the page? Well, the page stealer is there to ensure that there is a supply of free RAM pages available for an initial page fault. If the number of free RAM pages drops below a specified value (minfree), the page stealer will try to get some pages back. It keeps on stealing pages until it reaches an upper limit (maxfree).

So how does it decide which pages to steal? The page stealer selects the least recently used, or LRU, pages. If a page has been modified in RAM it's classed as a dirty page and is written to a backing store (either page space or a filesystem); if it's clean (the copy in RAM matches the copy on disk) then the RAM page is simply purged.

Note that the page space is used for non-persistent or working pages, and the filesystem is used for persistent or file pages.

There is, of course, a basic assumption here that all stale pages are treated equally, i.e. whether a page is a file page or a nonfile page makes no difference to the page stealer.

However, this is not the case. Increased paging activity makes the VMM act upon the different types of (stale) pages in different manners. When the number of stale file pages exceeds a threshold, set by the maxperm parameter, the page stealer will steal only file pages.

If the number of stale file pages is below maxperm (but above the set minperm threshold) then two other considerations come into play.

The VMM checks the repage rates of both file and nonfile pages, and will steal only file pages as long as the file page repage rate is lower than the repage rate for nonfile pages.

If that is not the case, both types of pages are treated as equal victims.

PERFORMANCE HITS / ACTUAL DISK I/O...

To understand the performance hit of the paging figures that you come across, you need to realize that page faults do NOT (necessarily) result in disk activity. Remember from above that only the repage fault - the act of bringing back previously used data into memory - causes disk I/O to be scheduled.

Page out I/O only occurs when a page is stolen by the page stealer AND is marked as 'dirty'. This only happens when there is a shortage of free RAM pages. Hence the page-out figure can be an indicator of how memory constrained the system is. The vmstat command is only of limited use as it just reports activity concerned with page space (and not paging to/from filesystem space).

If the free list consistently appears to hover around the minfree value (watch the "fre" column in vmstat), it does not follow that the system is memory constrained; consider the scenario where an initial page fault is resolved by purging a clean, but stale, page. In this case there is paging activity but no corresponding I/O.
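A low-tech way to see which case you're in: run a slow vmstat, e.g.

vmstat 5

and watch whether the fre column hovers near minfree while po stays at zero (clean pages being purged, no I/O) or whether po climbs along with the stealing (real page-out I/O).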

System performance may be improved by reducing the amount of RAM that file pages occupy - this ensures that working pages are not continually being pushed out to make way for file pages.

This can be achieved by using the vmtune command (in /usr/samples/kernel) and DECREASING the values of minperm and maxperm.

PAGING SPACE

So how much page space do I need? For systems that have up to 256MB of real memory, use the well-known formula...

page_space = 2 x real_memory

...for those systems with more than 256MB of real memory use...

page_space = 512MB + (real_memory - 256MB) * 1.25
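Worked through for the 8GB server at the top of this thread, purely as an illustration:

page_space = 512MB + (8192MB - 256MB) * 1.25 = 512MB + 9920MB = 10432MB (about 10GB)

which is exactly the kind of oversized figure that, as noted under "Rule of Thumb" above, a well-tuned server rarely needs.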

The following should also be adhered to where possible:

1. configure just one paging space per disk
2. use between 2 and 6 paging spaces on a medium-sized system
3. configure the paging spaces on each disk to be the same size
 
screwloose wrote:

>Change minperm to 5 and maxperm to 20, but there is not an
>algorithm for those. They are just values that usually work
>best and is a good starting point.

Those settings would kill my production server's performance.

Just as there is no algorithm, there are no values that "usually work best". If there were, they would be the default. VMM tunings, especially minperm and maxperm, are best determined by a careful analysis of the workload and an understanding of the workings of the VMM. Tom Farwell's
RS/6000: Understanding Hardware, AIX Internals, and Performance: Professional Reference Edition
would be a good resource for cultivating that understanding. His sessions at the 2002 RS6000 Technical University were some of the most informative on the inner workings of AIX that I've attended.

Speaking of Tom Farwell, unless you are he, you really should have linked to his article instead of cutting and pasting it.

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L

 
I believe it was an article that I saved from an IBM site. And the algorithm, as a starting point, came from IBM.

“Just as there is no algorithm, there are no values that "usually work best". If there were, they would be the default.”

And why do you think they have those defaults? And why do you think IBM suggests you NOT change them unless you know exactly what you are doing?

You seem to be a “know-it-all” jerk, Rod.
 
I'm sorry you feel that way, screwloose.

Just to be clear, I wasn't talking about the algorithms for minfree, maxpgahead, and maxfree. I was only referring to the quoted statement that, without analysis or understanding, minperm and maxperm should be set to 5 and 20.

I agree with IBM that you should NOT adjust VMM settings unless you know what you're doing. This is why I linked to the performance redbook before presenting a very conservative method for adjusting just one parameter, hopefully with sufficient explanation of the concepts and reasoning behind it.

If you're offended by my wanting proper attribution of copyrighted material, you'll just have to be offended. It's not that hard to look up sources or, if you cannot locate them, preface the quote with a statement to that effect.

Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L

 
Now now, put your handbags away, gals...

I guess what both the guys above are saying is that it's all a bit suck-it-and-see. Never make changes like this casually on a production server; if you have to, do it out of working hours and after doing a backup/mksysb.



-
| Mike Nixon
| Unix Admin
-------------
 
If we used belts instead of handbags, we could get on ESPN8 "the ocho". :)

That's a Dodgeball reference, for those who had more intellectual things to do this weekend.



Rod Knowlton
IBM Certified Advanced Technical Expert pSeries and AIX 5L

 