
NFS daemon problem 2

Status
Not open for further replies.

NavinB (Technical User), Apr 11, 2001
Hi,

I need to find out how many clients are concurrently accessing the shared filesystems (how many remote mounts) on my IBM RS/6000 server (AIX 4.3.3).
Also, the default number of concurrent nfsd daemons running on the system is 6. If there are more connections than that, do I have to restart the nfsd daemons with a higher value?

Any help would be appreciated.

Thanks.
 
As far as I understand, there cannot be more connections than the set number of nfsd's. So if you have that set to six, you can only allow 6 concurrent connections. Each biod (NFS client) request requires exclusive use of an nfsd.

nfsd's have very low system resource usage in themselves, so increasing them shouldn't adversely affect system performance.

You can use 'smit chnfs' to change the value, both with immediate effect and at each system boot.

To actually see the stats for your nfs connections, you can use netstat -p udp and look for "dropped due to full socket buffers". If you have many, then you either have too few nfsds to fulfill requests, or you have too small a size for nfs_socketsize, but that's a whole different kettle of fish.

IBM advise that you count up the biod's running on all your nfs clients combined, add 20% and set that many nfsd's on your server.
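As a sketch of that sizing rule, the arithmetic looks like the following; the per-client biod counts here are hypothetical placeholders (on each client you could get the real count with something like `ps -ef | grep -c '[b]iod'`):

```shell
# Hypothetical biod counts gathered from three NFS clients.
client_biods="6 6 8"

total=0
for n in $client_biods; do
    total=$((total + n))
done

# Add 20 percent on top, rounding up, per the rule of thumb above.
nfsds=$(( (total * 12 + 9) / 10 ))
echo "suggested nfsd count: $nfsds"
```

With 20 biods across the clients, this suggests 24 nfsd's on the server.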

Hope that helps,
LHLTech

IBM Certified Specialist - AIX System Support
Halfway through CATE exams!
 
Hi,

Thanks for your reply.
But as you wrote, "count up the biod's running on all your nfs clients": how do I do that?
Also, I need to find out how many clients have mounted a particular shared filesystem. How should I find this out?


 
Navin,

To see which clients are NFS-mounting from your server, execute:

showmount -a

This will list all remote mounts. One caveat: the list is only up to date if the filesystems are unmounted properly, so if a client has just been rebooted its entry in /etc/rmtab will still exist. It is also worth looking in /etc/sm, which lists all hosts that have viewed or edited NFS-mounted filesystems.
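One quick way to count the mounts of a particular filesystem from that output; the sample lines below are hypothetical stand-ins for real showmount -a output:

```shell
# Hypothetical 'showmount -a' output (one client:/filesystem per line).
sample="imsu1:/software/oracle8i
imsu2:/software/oracle8i
imsu1:/home"

# Count how many clients have a given filesystem mounted.
count=$(echo "$sample" | awk -F: -v fs="/software/oracle8i" '$2 == fs' | wc -l)
echo "$count client(s) have /software/oracle8i mounted"
```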

Here is some info. about changing the number of nfsd and biod daemons:

Choosing Initial Numbers of nfsd and biod daemons
Determining the best numbers of nfsd and biod daemons is an iterative process. Guidelines can give you no more than a reasonable starting point.

By default there are six biod daemons on a client and eight nfsd daemons on a server. The defaults are a good starting point for small systems, but should probably be increased for client systems with more than two users or servers with more than two clients. A few guidelines are as follows:

In each client, estimate the maximum number of files that will be written simultaneously. Configure at least two biod daemons per file. If the files are large (more than 32 KB), you may want to start with four biod daemons per file to support read-ahead or write-behind activity. It is common for up to five biod daemons to be busy writing to a single large file.

In each server, start by configuring as many nfsd daemons as the sum of the numbers of biod daemons that you have configured on the clients to handle files from that server. Add 20 percent to allow for non-read/write NFS requests.

If you have fast client workstations connected to a slower server, you may have to constrain the rate at which the clients generate NFS requests. The best solution is to reduce the number of biod daemons on the clients, with due attention to the relative importance of each client's workload and response time.
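The client-side guideline above reduces to simple arithmetic; the file counts here are assumptions for illustration:

```shell
# Files this client is expected to write simultaneously (assumed values).
small_files=2    # files under 32 KB
large_files=1    # files over 32 KB

# Two biods per small file, four per large file, per the guideline above.
biods=$(( small_files * 2 + large_files * 4 ))
echo "starting biod count for this client: $biods"
```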
Tuning the Numbers of nfsd and biod daemons
After you have arrived at an initial number of biod and nfsd daemons, or have changed one or the other, do the following:

First, recheck the affected systems for CPU or I/O saturation with the vmstat and iostat commands. If the server is now saturated, you must reduce its load or increase its power, or both.
Use the command netstat -s to determine if any system is experiencing UDP socket buffer overflows. If so, use the command no -a to verify that the recommendations in Tuning Other Layers to Improve NFS Performance have been implemented. If so, and the system is not saturated, increase the number of biod or nfsd daemons.
Examine the nullrecv column in the nfsstat -s output. If the number starts to grow, it may mean there are too many nfsd daemons. However, this is less likely on this operating system's NFS servers than it is on other platforms. The reason is that not all nfsd daemons are awakened at the same time when an NFS request comes into the server; instead, the first nfsd daemon wakes up, and if there is more work to do, this daemon wakes up the second nfsd daemon, and so on.
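Pulling nullrecv out of that output can be scripted; the sample text below is a hypothetical stand-in for the server rpc section of real nfsstat -s output:

```shell
# Hypothetical 'nfsstat -s' server rpc stats (real values come from
# running nfsstat -s on the server).
sample="Server rpc:
calls      badcalls   nullrecv   badlen     xdrcall
125487     0          3          0          0"

# nullrecv is the third field on the line after the header row.
nullrecv=$(echo "$sample" | awk '/nullrecv/ { getline; print $3 }')
echo "nullrecv: $nullrecv"
```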
To change the numbers of nfsd and biod daemons, use the chnfs command.

To change the number of nfsd daemons on a server to 10, both immediately and at each subsequent system boot, use the following:

# chnfs -n 10
To change the number of biod daemons on a client to 8 temporarily, with no permanent change (that is, the change happens now but is lost at the next system boot), use the following:

# chnfs -N -b 8
To change both the number of biod daemons and the number of nfsd daemons on a system to 9, with the change delayed until the next system boot, run the following command:

# chnfs -I -b 9 -n 9
Increasing the number of biod daemons on the client worsens server performance because it allows the client to send more requests at once, further loading the network and the server. In extreme cases of a client overrunning the server, it may be necessary to reduce the client to one biod daemon, as follows:

# stopsrc -s biod
This leaves the client with the kernel process biod still running.

PSD
IBM Certified Specialist - AIX V4.3 Systems Support
IBM Certified Specialist - AIX V4 HACMP
 
Hi,

Thanks to both of you.
However, I need some more clarification: when I use the command showmount -e, a few entries are shown with the client server name while others are just shared filesystems (e.g. ':/software/oracle8i' versus 'imsu1:/software/oracle8i').
So what does this mean?
 
Navin,

showmount -e just lists the filesystems exported by the node you issue the command on, i.e. the same as exportfs.

showmount -a lists nodename:/nfs_filesystem_mounted, so showmount -e is not what you want.

I hope that clears that up for you.



PSD
IBM Certified Specialist - AIX V4.3 Systems Support
IBM Certified Specialist - AIX V4 HACMP
 
Hey, sorry man, that was a typo.
I actually ran showmount -a, which shows entries like ':/software/oracle8i' and 'imsu1:/software/oracle8i'.
Any ideas?
 
Navin,

I suspect those entries come from IP addresses that the host cannot translate, i.e. addresses not in /etc/hosts, DNS, or whichever name-resolution mechanism you use. It may well be a PC-NFS client if you use PCs to mount filesystems.
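You can spot those entries mechanically: in showmount -a output they have an empty hostname field before the colon. The sample lines below are hypothetical stand-ins for real output:

```shell
# Hypothetical 'showmount -a' output, one entry with no resolvable name.
sample="imsu1:/software/oracle8i
:/software/oracle8i"

# Entries whose hostname field (before the colon) is empty.
unresolved=$(echo "$sample" | awk -F: '$1 == ""' | wc -l)
echo "$unresolved entry(ies) with no resolvable client name"
```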

Cheers



PSD
IBM Certified Specialist - AIX V4.3 Systems Support
IBM Certified Specialist - AIX V4 HACMP
 