We have a Solaris 10 server with 512 GB of RAM.
It is running a database, so most of that memory is allocated to the DB. However, there is still about 50 GB free.
When we copy large dump files (50-100 GB) from local disk to an NFS mount, we see a high scan rate and the database starts to have issues.
Running vmstat, you can see that when the copy starts there is 50 GB free. This slowly drops until it reaches about 8 GB free (lotsfree, which is 1/64 of physical memory: 512 GB / 64 = 8 GB). It is at this point that the page scanner starts scanning and we get issues.
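For reference, this is how we sanity-checked the 1/64 figure; a minimal C sketch using sysconf(3C), purely illustrative:

    /* Print physical memory and the default lotsfree threshold,
     * which Solaris sets to physmem / 64 pages. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long long pages  = sysconf(_SC_PHYS_PAGES);
        long long pagesz = sysconf(_SC_PAGESIZE);
        long long phys_mb = pages * pagesz / (1024 * 1024);
        long long lotsfree_mb = phys_mb / 64;
        printf("physical memory : %lld MB\n", phys_mb);
        printf("default lotsfree: %lld MB\n", lotsfree_mb);
        return 0;
    }

On our box this prints 524288 MB physical and 8192 MB for lotsfree, which matches where the scanner kicks in.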
We are speculating that the files are being mmap'ed while being copied, so their pages are not freed automatically when the free list gets low, or that the data in segmap is not being moved to the cachelist fast enough. We read this, but it has not helped:
We really do not need these files cached, as they are only read once!
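One thing we are considering is doing the copy with direct I/O so the pages never enter the page cache at all. A minimal sketch of such a copy using directio(3C), which Solaris supports on UFS and NFS files; the buffer size and error handling are just illustrative:

    /* Sketch: copy a dump file with direct I/O so its pages bypass
     * the page cache. directio(3C) is advisory. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/fcntl.h>   /* directio(), DIRECTIO_ON on Solaris */

    #define BUFSZ (1024 * 1024)   /* 1 MB copy buffer */

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
            return 1;
        }
        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }
        /* Advise the kernel to do unbuffered I/O on both files so the
         * copy does not flood the free list with once-read pages. */
        (void) directio(in, DIRECTIO_ON);
        (void) directio(out, DIRECTIO_ON);

        char *buf = malloc(BUFSZ);
        ssize_t n;
        while ((n = read(in, buf, BUFSZ)) > 0) {
            if (write(out, buf, (size_t)n) != n) {
                perror("write");
                return 1;
            }
        }
        free(buf);
        close(in);
        close(out);
        return n < 0 ? 1 : 0;
    }

Mounting the NFS target with -o forcedirectio might have a similar effect without changing the copy tool, if that is acceptable for everything else on that mount.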
Any ideas on how we can prevent the page scanner from running in this situation?