
What Staging Chunk Size?


bbowers (IS-IT--Management), Jan 25, 2006
I have ARCserve 11.5 SP1 running on a Windows 2003 server backing up NetWare 6.5 SP5 across a WAN. We back up around 30 servers a night with this process: 7 jobs with multiple servers in each job, to FSD and then to tape. I have heard that lowering the chunk size in the FSD configuration will increase performance. Is 64k a good value for this setup?
 
It might or might not help; each installation is different, so trial and error is the way to go. Also keep ARCserve up to date.
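One way to do that trial and error outside of ARCserve is to time raw sequential writes at the candidate chunk sizes on the staging volume itself. A minimal sketch in Python (the FSD path is hypothetical; this measures only the disk side, not ARCserve's own buffering):

    import os
    import time

    TEST_PATH = r"E:\FSD\chunk_test.bin"   # hypothetical path on the FSD staging volume
    FILE_SIZE = 256 * 1024 * 1024          # 256 MB written per pass
    BLOCK_SIZES = [64 * 1024, 128 * 1024, 256 * 1024, 512 * 1024]

    for block in BLOCK_SIZES:
        buf = os.urandom(block)            # one block of incompressible data
        start = time.time()
        with open(TEST_PATH, "wb") as f:
            written = 0
            while written < FILE_SIZE:
                f.write(buf)
                written += block
            f.flush()
            os.fsync(f.fileno())           # force data to disk, not just the OS cache
        elapsed = time.time() - start
        rate = FILE_SIZE / elapsed / (1024 * 1024)
        print(f"{block // 1024:4d} KB chunks: {rate:6.1f} MB/s")

    os.remove(TEST_PATH)

If the curve is flat across block sizes, the FSD chunk size is unlikely to be your bottleneck.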
 
Hi
I've been trying to work this out too. Can someone explain, or point me to an explanation of, how the chunk size affects performance on FSDs? I've currently left it at the default of 512k, but I want to try to increase the write speed (we're pretty sure we don't have a network bottleneck)...
Thanks
 
davidmichel is right. I tried different chunk sizes and found that different servers reacted differently to them. (We are backing up over 100 servers in ARCserve in various forms.) You need to experiment to see what chunk size works best for overall performance.
 
OK. We are backing up about 20 servers with one job to FSD and then to tape. Only one of the servers is a problem, because it is half a terabyte in size. Before we started staging the data, it took 5 hours to back it up straight to tape (to a locally attached drive, admittedly). Now it takes 10 hours to stage the data across a Gigabit LAN. All the other servers finish in 2 hours, but this one server, using one stream because it's an agent backup, is a problem.
10 hours is just about acceptable and within our backup window. The trouble is, we want to implement the same solution on a different site where the large server has a terabyte of data! I still need that to back up within 10 hours.
I will see where I get with trial and error on this, but is the general theory that the smaller the chunk size, the better the performance?

Thanks,
Ruth
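For what it's worth, the numbers in that post already point away from chunk size: half a terabyte in 10 hours over a single stream works out to roughly 14 MB/s, far below what a Gigabit LAN can carry. A quick back-of-the-envelope check (assuming 500 GB for "half a terabyte"):

    # Rough arithmetic for the staging job described above. 500 GB is an
    # assumption for "half a terabyte"; the agent backup is a single stream.
    size_mb = 500 * 1024
    hours = 10
    rate = size_mb / (hours * 3600)                           # effective MB/s
    print(f"Effective single-stream rate: {rate:.1f} MB/s")   # ~14 MB/s

    # Gigabit Ethernet tops out around 110 MB/s, so the wire is mostly idle.
    print(f"Share of GbE used: {rate / 110:.0%}")             # ~13%

    # At the same per-stream rate, a 1 TB server would need:
    print(f"1 TB at this rate: {1024 * 1024 / (rate * 3600):.1f} hours")  # ~20 h

At that per-stream rate, the 1 TB server at the other site would need about 20 hours, so getting more streams running in parallel is likely to matter far more than any chunk-size tweak.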
 
Hi, we've got a representative from CA in our company. I asked him what the best setting for chunk size is. He answered that the chunk size has NO influence on the staging speed! He said the difference in speed is one or two percent per job (they did some testing at CA). Then I told him that we have problems with speed while staging (sometimes it's quicker to go directly to tape, with no staging), and he said that it is a general problem with fibre channel connections: the backup program must wait too long in the disk queue to stage the data, if I understood him correctly. The recommendation for staging is direct-attached storage, but I have not been able to try it yet. :-(
Pavel
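A cheap way to test that disk-queue explanation is to watch the staging disk's queue length while a staging job runs. A minimal sketch using Windows' built-in typeperf counter (the "_Total" instance is a placeholder; substitute the PhysicalDisk instance that actually holds the FSD):

    # Sample the staging disk's queue length while a staging job is running.
    # "_Total" is a placeholder -- substitute the PhysicalDisk instance that
    # holds the FSD (e.g. "1 E:").
    import csv
    import subprocess

    COUNTER = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"

    out = subprocess.run(
        ["typeperf", COUNTER, "-si", "2", "-sc", "30"],  # 30 samples, 2 s apart
        capture_output=True, text=True, check=True,
    ).stdout

    rows = list(csv.reader(out.strip().splitlines()))
    values = [float(r[1]) for r in rows[1:] if len(r) > 1 and r[1]]
    print(f"avg queue length {sum(values) / len(values):.2f}, max {max(values):.2f}")
    # A queue that stays well above the number of spindles in the array means
    # the backup is waiting on the disks, and chunk size won't fix that.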
 
Another common bottleneck can be the cache on the storage arrays; short of fairly major hardware upgrades, that is not something that can be easily overcome.
 