
Optimized file copy over network


AP81 (Programmer) · Joined Apr 11, 2003 · 740 posts · AU
Hi,

I have some routines that copy large numbers of files to various servers around the office. An example would be 200 files at 8 to 15 MB each to \\server1\m\

Currently I just use FileCopy; however, I have seen numerous recommendations that copying files over a network is more efficient using a FileStream.

Can anyone verify this?
 
I don't know precisely how FileCopy or FileStream works, but I do observe that some of the copy routines in my own programs are faster than the system copies (DOS or Windows). To that end, someone even made a program (TeraCopy) to try to speed up the process.

Generally the difference comes down to the size of the file buffer being allocated. Hard drive access is always slower than memory access, and with mechanical drives there is the overhead of the head seeking to a sector, reading it, and returning the data. With a small buffer, that whole process repeats many more times.

I noticed in my old DOS programs (especially my disk-fill program) that the copy is much faster with a bigger buffer. I'd just use BlockRead/BlockWrite to copy my files with a larger buffer (64 KB or 128 KB) instead of the default (128 bytes for BlockRead; I don't know the default for the others).

But that's the suggestion: Play with making the copy buffer much bigger.

Perhaps that is the difference.
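
Something along these lines is what I mean. This is only a rough sketch in Delphi using TFileStream with an adjustable buffer; the name BufferedCopy and the 64 KB default are placeholders, not something from your code:

uses Classes, SysUtils;

// Copy Src to Dst through a caller-sized buffer; a bigger buffer means
// fewer read/write round trips to the disk and the network share.
procedure BufferedCopy(const Src, Dst: string; BufSize: Integer = 64 * 1024);
var
  SrcStream, DstStream: TFileStream;
  Buffer: array of Byte;
  BytesRead: Integer;
begin
  SetLength(Buffer, BufSize);
  SrcStream := TFileStream.Create(Src, fmOpenRead or fmShareDenyWrite);
  try
    DstStream := TFileStream.Create(Dst, fmCreate);
    try
      repeat
        BytesRead := SrcStream.Read(Buffer[0], BufSize);
        if BytesRead > 0 then
          DstStream.WriteBuffer(Buffer[0], BytesRead);
      until BytesRead < BufSize;  // a short read means end of file
    finally
      DstStream.Free;
    end;
  finally
    SrcStream.Free;
  end;
end;

Try it with different buffer sizes and time the results; that will tell you quickly whether the buffer is your bottleneck.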
 
Are these files compressed?
If not, compressing them before you copy them will save some time, but maybe that is not wanted.

Another solution is to have a service at the receiving end that implements a custom file-copy protocol over TCP.

I did this for one customer and it works quite nicely (and fast): the client compresses the data on the fly and the server decompresses it.
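
Just to illustrate the compression side (this is not the actual protocol I wrote for that customer), here is a minimal sketch using the standard ZLib unit. The receiving service would wrap its end in a TDecompressionStream; the name SendCompressed is only a placeholder:

uses Classes, SysUtils, ZLib;

// Compress SrcFile into any destination stream (e.g. a socket stream)
// as it is read; the receiver unpacks with TDecompressionStream.
procedure SendCompressed(const SrcFile: string; Dst: TStream);
var
  Src: TFileStream;
  Comp: TCompressionStream;
begin
  Src := TFileStream.Create(SrcFile, fmOpenRead or fmShareDenyWrite);
  try
    Comp := TCompressionStream.Create(clFastest, Dst);
    try
      Comp.CopyFrom(Src, 0);  // 0 = copy the whole source stream
    finally
      Comp.Free;              // flushes the remaining compressed data
    end;
  finally
    Src.Free;
  end;
end;

How much this buys you depends on the data: already-compressed files (zips, images) will hardly shrink at all.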


/Daddy

-----------------------------------------------------
What You See Is What You Get
Never underestimate tha powah of tha google!
 
In practice it may not matter. If you're using a 100 megabit Ethernet connection, then you'll be limited to about 10 MB/sec (100 Mbit/s is 12.5 MB/s raw; protocol overhead eats the rest).

As whosrdaddy suggested, a compressed TCP stream may help, but the next bottleneck is right behind it: the hard drive. It will burst up to about 50 MB/sec, but for sustained reads across multiple files you'll probably not get more than 15 MB/sec unless you have a RAID setup.

I'm just using rough numbers here from memory and experience - things may have changed since I last took measurements.

Compare these numbers to the speeds you're getting currently to see if it's worth the effort of optimizing.
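
As a rough back-of-envelope check against the numbers in the original post (200 files at 8 to 15 MB over a ~10 MB/sec link), something like this; the figures are only my assumptions, not measurements:

program Estimate;
{$APPTYPE CONSOLE}
uses SysUtils;
const
  FileCount = 200;   // from the original post
  AvgFileMB = 11.5;  // midpoint of 8..15 MB
  LinkMBps  = 10.0;  // ~100 Mbit Ethernet after overhead
begin
  WriteLn(Format('Total: ~%.0f MB, expected time at %.0f MB/sec: ~%.0f sec',
    [FileCount * AvgFileMB, LinkMBps, FileCount * AvgFileMB / LinkMBps]));
  // prints roughly: Total: ~2300 MB, expected time at 10 MB/sec: ~230 sec
end.

If your current copies already finish in around that time, the network is the limit and a different copy routine won't change much.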
 
