I am having performance problems. According to what I have read, if I have 1.5 GB of memory, BUFFERS should be set to around 150000. Is that correct? And if I do that, could it increase the checkpoint time? What can I do then? Thank you for your help.
There are several things to check for your performance issues:
Do an onstat -p (as was already suggested). Note the following items: bufreads, %cached, and bufwaits.
The bufwaits figure should be LESS THAN 1% of bufreads.
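For example, with made-up numbers:

    onstat -p    # look for bufreads, %cached and bufwaits in the output
    # e.g. if bufreads = 2,000,000, bufwaits should stay under 20,000 (1%)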
You also need to check your checkpoint interval (in seconds).
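That's the CKPTINTVL parameter in your ONCONFIG file; a sketch, assuming the common 300-second default:

    CKPTINTVL 300    # maximum seconds between checkpoints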
Also do an onstat -F and note the number of FG, LRU, and chunk writes. Chunk writes are the most efficient.
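For example:

    onstat -F    # shows the Fg Writes, LRU Writes and Chunk Writes counts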
To adjust this, lower the values for LRU_MAX_DIRTY and LRU_MIN_DIRTY. I suggest starting at 10 and 5 for these. This will help keep your LRU cache "cleaner" and drastically lower your checkpoint times by lessening the amount of data that needs to be written out to disk during a checkpoint. Also check the number of page cleaners and LRU queues; keep both values the same (127 is the max).
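As a sketch, the relevant ONCONFIG lines would look something like this (the 10/5 starting points and the 127 maximum come from the advice above; pick values to suit your own load):

    LRU_MAX_DIRTY 10    # start cleaning an LRU queue at 10% dirty
    LRU_MIN_DIRTY 5     # stop cleaning once it drops to 5% dirty
    LRUS 127            # number of LRU queues (127 is the max)
    CLEANERS 127        # page cleaner threads; keep equal to LRUS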
As for the number of buffers: I have 4 GB of RAM (8 databases at about 0.75 GB each, and 80 users) and run with BUFFERS at 175000. It's really your choice, depending on how much use the server will get. I'd start by dropping that number to 50000 and working up from there.
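In ONCONFIG terms, a starting point per the above (raise it while %cached keeps improving):

    BUFFERS 50000    # starting value; work upward from here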
My checkpoint times rarely exceed 4 seconds......
Last question: UNIX or NT? My suggestions are based on my experience with our SCO UNIX boxes running 7.31.UC5 on OpenServer 5.0.6.
Hope this helps..... There are a lot of variables to take into consideration when performance tuning.....
You also didn't mention what kind of performance increase you are looking for, or in what area.
One correction to the above: the buffer wait number is an absolute number and is relatively meaningless on its own. What you need to look at is the buffer wait RATIO, as follows: ratio = bufwaits / (pagereads + bufwrites) * 100. A ratio above 20% needs to be addressed; between 10% and 20% it should be monitored.
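A quick way to work that out, plugging in your own numbers from onstat -p (the values below are hypothetical; pagereads and bufwrites appear as pagreads and bufwrits in the output):

    bufwaits=5000; pagereads=400000; bufwrites=100000
    echo "scale=2; $bufwaits / ($pagereads + $bufwrites) * 100" | bc
    # prints 1.00, i.e. 1% - well under the 10% watch threshold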