
Slow NFS transfer


trifo

MIS
May 9, 2002
269
HU
Hi!

I have two AIX hosts (5.3 TL05), connected with two Gigabit Ethernet cards configured as an EtherChannel.
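For reference, this is roughly how I verified the channel and link settings (ent2 stands in for the actual aggregate device; yours may differ):
Code:
# Show the EtherChannel device configuration (member adapters, jumbo_frames)
lsattr -El ent2
# Per-adapter link status and negotiated speed/duplex
entstat -d ent2 | grep -iE "link status|media speed|jumbo"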

I want to use an NFS mount to share some disk space from node2 to hold Oracle backups. The NFS export succeeded with the following parameters:
Code:
/backup -sec=sys:krb5p:krb5i:krb5:dh,rw,root=elesDB
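For the record, this is roughly how I check that the export is actually live (elesDBstandby is node2):
Code:
# Re-export everything in /etc/exports, verbosely
exportfs -va
# From node1, list what the server says it exports
showmount -e elesDBstandby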

The NFS mount on node1 also succeeded:
Code:
/backup:
        dev             = "/backup"
        vfs             = nfs
        nodename        = elesDBstandby
        mount           = true
        options         = cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,noac,vers=3,timeo=600
        account         = false

node1: NFS client. The options (except for cio) are needed for Oracle to be able to write backups.
node2: NFS server
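For reference, the stanza corresponds to a one-off mount roughly like this, and nfsstat shows which options the client actually negotiated:
Code:
# One-off equivalent of the /etc/filesystems stanza (options as above)
mount -v nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,noac,vers=3,timeo=600 \
    elesDBstandby:/backup /backup
# Show the negotiated options for each NFS mount on the client
nfsstat -m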


The mount itself appears to work, but the transfer speed is terribly slow: about 3 MB/s.

Checked the network interfaces: both have jumbo frames enabled with MTU set to 9000, and all interfaces link at 1000 Mb/s full duplex. Tested raw network transfer using ftp to move data from node1:/dev/zero to node2:/dev/null and got about 115 MB/s. Tested local filesystem speed on node2 using dd to read from /dev/zero and write to the filesystem, and it wrote 1 GB in under 9 seconds.
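The tests were along these lines (block sizes and the test file path are from memory):
Code:
# Raw network: stream zeros from node1 to node2 over ftp, no disk involved
ftp node2
ftp> bin
ftp> put "|dd if=/dev/zero bs=32k count=10000" /dev/null
# Local fs speed on node2: write 1 GB into the backup filesystem
dd if=/dev/zero of=/backup/ddtest bs=1024k count=1024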

Checked client pages, but there is no bottleneck there (maxclient = 10% and usage is ~3%).
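Checked roughly like this:
Code:
# Current maxclient% tunable
vmo -a | grep maxclient
# Actual client-page usage as a percentage of real memory
vmstat -v | grep -iE "numclient|maxclient"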

Do you have any idea what else to check to get this working at a reasonable speed?

--Trifo
 
Besides NFS being a notoriously slow protocol: have you dedicated your NICs to NFS traffic, or are they sharing traffic with the system in general? Also, have you tried throttling down the MTU? Jumbo frames are useless when transferring smaller files.
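Something like this should drop the MTU for a test (en2 stands in for your EtherChannel interface; changing it bounces the interface, so do it at a quiet moment):
Code:
# Drop the interface MTU back to the standard Ethernet default
chdev -l en2 -a mtu=1500
# Verify the new setting
lsattr -El en2 -a mtu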
 
Well, my NICs share traffic, mostly with an Oracle database, including user connections and archive-log transfer to a standby node.

The config is as follows:
PROD node:
production Oracle server, mostly connections from app servers
mounts /backup fs via NFS from STANDBY node

STANDBY node:
oracle DataGuard standby node
performs backups to tape via the network
mounts the /backup fs as a local filesystem residing on local disks (1.5 TB)
shares /backup fs via NFS to PROD node

Both nodes have a dual 1 Gb EtherChannel connection to the main Ethernet switch.

By the way, enabling jumbo frames with MTU 9000 just raises the upper limit on packet size; the nodes are still able to send small packets if they want to. Am I right?

 
Two things stand out to me:

1) Is your EtherChannel working correctly? Try pulling one of the NICs out of the EtherChannel and running with a single NIC to see if that helps. Some switches behave badly with EtherChannel, and I haven't had much luck getting EtherChannel to work across switches. This is easy to try and won't require downtime.

2) I don't have any experience with cio on an NFS mount, but doesn't it turn off all filesystem buffering? We use cio on the oradata directories only, by using namefs to "loop mount" another directory (sketched below). That way Oracle gets past the buffers with cio, which keeps it happy, and we still have a quick way to copy files into and out of the oradata directories for cold backups etc. with filesystem buffers intact.
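Roughly like this; the paths are just examples, not our real layout:
Code:
# Base JFS2 filesystem mounted normally (buffered) at /oracle/u01
mount /oracle/u01
# Expose the same directory to Oracle with cio via a namefs overlay
mount -v namefs -o cio /oracle/u01/oradata /oradata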

 
foobar13: the switches are EtherChannel-aware and configured to work in that mode. Even so, I will experiment with the NICs; EtherChannel adapters can be taken offline/online at runtime.
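If memory serves, Dynamic Adapter Membership works something like this (ent2 being the channel, ent1 the member to pull; double-check the syntax before trying it on production):
Code:
# Remove ent1 from EtherChannel ent2 at runtime
/usr/lib/methods/ethchan_config -d ent2 ent1
# ...and add it back later
/usr/lib/methods/ethchan_config -a ent2 ent1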

The cio mount option was just a last try; in our environment it is only needed for Oracle datafiles anyway. It is off now, but with no effect.

--Trifo
 