
Speeding up a find routine from a remote client

Status
Not open for further replies.

Jimbo2112

IS-IT--Management
Mar 18, 2002
109
GB
Hi All,

I have written a C-shell script that incorporates a find routine. The find command searches a tree for directories and then copies them to a temp holding area for later use. This is the command:

find $jobroot/GRP_$jrnl -type d -name "DIV_$file" -exec cp -r {} $temp/ \;

When I telnet from a terminal to the server (where all the files to be searched are held) and run the script, it is quite fast. If I run the script from a client that is not logged into the server, the find routine takes an age. Is there any way I can speed this up? Or am I doomed to the remote search being very slow?

Cheers

Jimbo
 
Is the temp directory on the server?
Is the server filesystem mounted with NFS?
Are the copied directories big (du -sk)?
Is the find alone (with no cp -r) fast?

 
Hi,

The temp directory is local to the client that the script is being run from

Yes, NFS is being used

Not massive, not even very big

The basic find routine (no cp -r) is much faster.

Reading into what you have said makes me think that I should copy the results of the find command to a temp area on the server instead of back to the client's local disk?

I will try this and let you know! (unless you have a better idea!)

Cheers

Jimbo
 
If you want the files to end up on your client you are doomed: the transfer of the file contents over the network must be the problem.

So you could use a temp directory on the server, but ...
I fear a copy to a server temp directory will not speed up the process.
It could even be slower: your files are currently transferred over the network from the server to your client's local directory. If you copy (just
Code:
cp
) from the server to the server using NFS, the content of the files will be transferred twice: from the server to a 'temporary area' on your client and back to the server.
Maybe
Code:
rcp -r server:fromdir server:todir
will be smart enough to do the job locally on the server but I am not sure. My man rcp states that:
Third-party transfers in the form:
[tt]rcp ruser1@rhost1:path1 ruser2@rhost2:path2[/tt]
are performed as:
[tt]remsh rhost1 -l ruser1 rcp path1 ruser2@rhost2:path2[/tt]

so it is worth a try.

The solution could be to use a remote command directly. If you can do an rcp you should have the rights to do remsh (or rsh on some systems). So
Code:
remsh server cp -r fromdir todir
should be faster.
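As a hedged sketch of that idea (the hostname `server`, the holding path, and the DIV name below are placeholders, not your real values), the whole find-and-copy can run server-side so no file content crosses the network; the pattern is demonstrated here in a local sandbox:

```shell
# Over the network this would be wrapped in remsh, e.g. (hypothetical
# hostname and server-side path):
#   remsh server "find $jobroot/GRP_$jrnl -type d -name 'DIV_$file' \
#       -exec cp -r {} /some/server/tmp/ \;"
# Local sandbox showing the same find/copy pattern:
mkdir -p /tmp/jobroot_demo/GRP_01/DIV_abc /tmp/holding_demo
echo data > /tmp/jobroot_demo/GRP_01/DIV_abc/file1
find /tmp/jobroot_demo/GRP_01 -type d -name 'DIV_abc' \
    -exec cp -r {} /tmp/holding_demo/ \;
ls /tmp/holding_demo/DIV_abc
```

The key point is that both the search and the copy happen on the machine that owns the disks; only the remsh command line travels over the wire.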

Oh wait! I said you are doomed if you want the files on your client, but something just occurred to me: you said your directories are not big, but are the files in them numerous? I mean, copying 1000 files of 1KB is really slower than copying 1 file of 1000KB. So maybe you could tar or cpio the files (on the server, via remsh) before transferring.
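A minimal sketch of that tar idea (hostname and paths are placeholders): pack the directory into one stream on the sending side and unpack it on the receiving side, so many small files become a single transfer. Shown here locally; over the network the producing tar would run under remsh:

```shell
# Remote form would be something like (hypothetical hostname/paths):
#   remsh server "cd /path/on/server && tar cf - DIV_abc" | tar xf - -C $temp
# Local demonstration of the tar pipe itself:
mkdir -p /tmp/tarsrc_demo/DIV_abc /tmp/tardest_demo
echo hello > /tmp/tarsrc_demo/DIV_abc/part1
(cd /tmp/tarsrc_demo && tar cf - DIV_abc) | tar xf - -C /tmp/tardest_demo
cat /tmp/tardest_demo/DIV_abc/part1
```

This avoids the per-file open/close round trips that make copying many small files over NFS slow.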

Let me know the results of your tests.
 
