
Excessive backup times


simple99 (MIS), Jun 18, 2007, AU
Hi,
Does this look like an excessive time for a weekly full backup on our SAN? It takes 33 hours for 1205 GB.
The event log shows:

Backup job [] completed. Client [], Agent Type [Windows 32-bit File System], Subclient [],
Backup Level [Full],
Objects [2564520],
Failed [61],
Duration [33:25:53],
Total Size [1205.19 GB],
Media or Mount Path Used [].

What could be the possible causes of this?

Appreciate any assistance with this.
Regards
 
That's about 36 GB an hour. Depending on what you're reading from and what you're writing to, that could be good or bad; hard to say.
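For reference, a rough sketch (in Python) of the arithmetic behind that figure, using the numbers from the event log above:

    # Rough throughput check using the figures from the job log above.
    size_gb = 1205.19                 # Total Size from the job log
    hours = 33 + 25/60 + 53/3600      # Duration 33:25:53 expressed in hours

    rate_gb_per_hr = size_gb / hours
    rate_mb_per_sec = size_gb * 1024 / (hours * 3600)

    print(f"{rate_gb_per_hr:.1f} GB/hr")   # ~36 GB/hr
    print(f"{rate_mb_per_sec:.1f} MB/s")   # ~10.3 MB/s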

 
As theravager says, it depends a lot on what you're reading from and writing to, but my gut reaction is that it's damned slow.

My guess would be that you're backing up a single stream at a time?

I found it can take some tweaking of streams and data readers to get decent throughput, but if you throw enough readers at it you should be able to pretty much saturate your tape drive(s).
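To put rough numbers on "enough readers" (a sketch only; the 80 MB/s figure is an assumption for an LTO-3-class drive, so substitute your own drive's rated speed):

    # How many ~36 GB/hr streams it takes to keep one tape drive busy.
    # 80 MB/s native is an assumption (roughly LTO-3); adjust for your hardware.
    drive_mb_per_sec = 80
    stream_gb_per_hr = 36

    drive_gb_per_hr = drive_mb_per_sec * 3600 / 1024     # ~281 GB/hr
    streams_needed = drive_gb_per_hr / stream_gb_per_hr  # ~8 streams

    print(f"Drive absorbs ~{drive_gb_per_hr:.0f} GB/hr, "
          f"so you need ~{streams_needed:.0f} streams at {stream_gb_per_hr} GB/hr each")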
 
Thanks for the responses.
This full backup is disk-to-disk.

Then it takes about as much again to do an Aux Copy from disk to tape.

So I should be looking at what changes I can make to the streams and readers in order to get better throughput.

 
Hi,
Here are my current settings; if you spot anything out of the ordinary, please comment.

The Subclients settings:
- No of Data Readers = 5
- Allow multiple data readers within a drive or mount point (checked)

Storage Policy Properties:
- Device Streams = 1

Magnetic Library Properties (Mount Paths tab):
- Maximum allowed writers

Mount Path Properties (Allocation Policy tab):
- Maximum allowed writers

I noticed that incrementals run at up to 50-60 GB/hr, but the fulls only transfer at a rate of 35-37 GB/hr (hence the slowdown).

scratching my head....
 
I would also hazard a guess that the sheer number of files you are shifting here is not helping.

33 hours for 1.2 TB isn't brilliant, but when you take into account the 2.5 million files you have on your SAN, 36 GB an hour doesn't sound overly bad either.

Other contributing factors to consider are whether there is any other network traffic your SAN is dealing with while it is also doing its backup (especially during the full backup window, which you say is slower), and also the speed of its network connection.
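A rough sketch of the per-file rate implied by the job log (same figures as above), which shows why the file count matters:

    # Per-file overhead (open/close, metadata, indexing) starts to dominate
    # when the average file is this small.
    objects = 2_564_520
    size_gb = 1205.19
    hours = 33 + 25/60 + 53/3600

    files_per_sec = objects / (hours * 3600)
    avg_file_mb = size_gb * 1024 / objects

    print(f"~{files_per_sec:.0f} files/sec, average file size ~{avg_file_mb:.2f} MB")
    # ~21 files/sec, average file ~0.48 MB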
 
Thanks markey164,
there is a large number of files being transferred.

No other backup runs when the weekly full backup is running on Friday nights.

The client server is in the same rack as the CommServe and MediaAgent. However, the client server is a two-node MS cluster, and since the Data Interface Pair uses the cluster's virtual server name, it backs up using the public network as opposed to the private network that is also set up between the servers in the rack.

I suppose if I can figure out a way of using the private network for this backup, I might see some speed increase. The private network is also 1 Gb.
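As a rough sanity check on the link speed (ignoring protocol and disk overhead, so treat it only as an upper bound):

    # Theoretical ceiling of a 1 Gbit/s link.
    link_mb_per_sec = 1000 / 8                      # ~125 MB/s
    link_gb_per_hr = link_mb_per_sec * 3600 / 1024  # ~439 GB/hr

    print(f"~{link_gb_per_hr:.0f} GB/hr theoretical maximum on a 1 Gb link")
    # The observed ~36 GB/hr is well under this, so raw link speed alone
    # probably isn't the ceiling, though a quieter private network could
    # still reduce contention.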

regards




 
Hi,
Do you think splitting the subclient into two might help?

I'm thinking that if I have two subclients backing up this 1.2 TB, it might speed things up a bit, as the number of files backed up by each subclient would be smaller (about 1 million).
Does that make sense?
 
I wouldn't expect this to make any difference, because whatever the problem or bottleneck is, it isn't removed by splitting the backup into two jobs.

Even if the bottleneck were the number of files and you split that backup into two jobs, you are ultimately still backing up the same 2.5 million files and the same 1.2 TB of data between the same source and destination over the same paths, so it should take about the same time. If anything, because you will have created TWO separate jobs, I would actually expect the overall backup time to be slightly longer.
 
OK, there goes that theory...

I thought that since the incremental backup rate is much faster than the full backup rate, the configuration of the subclient must be correct, the only difference being that the full backup is moving a much larger number of files.

 
I'd take a guess it's probably SATA disk; the act of reading and writing to the same magnetic library will be killing it.

Adding more readers and writers will probably make performance worse.

The reason incrementals are quicker is that you are only writing to disk, not reading and writing at the same time.

Well-planned RAID groups with multiple mag libs spread across them will increase performance. Other tips are to format using a 64 KB allocation unit size and to make sure the partitions are aligned.
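For example, something along these lines on Windows (a sketch only; "disk 1" and "E:" are placeholders for your mount path's disk and drive letter, and this obviously only applies when recreating an empty mag-lib volume):

    rem Align the partition, then format with a 64 KB allocation unit size.
    diskpart
        select disk 1
        create partition primary align=64
        assign letter=E
        exit
    format E: /FS:NTFS /A:64K /Q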
 
Thanks theravager,

The subclient is backing up from a SCSI disk array.

The setup is an HP MSA1500 controller with an MSA30 SCSI disk enclosure, set up as RAID 5 with a 16 KB stripe size.

 
I would say that's probably not too bad a speed then. Try experimenting with the number of readers and writers allowed to the disk until you get a nice performance balance.

While RAID 5 is nice for backups in general, the way CommVault writes data fragments it really badly and the disk seek times blow out fairly badly. Try using spill and fill as opposed to fill and spill if there is more than one mount path to the mag libs as well.
 
I resolved the problem of slow backup jobs with a simple solution: I put all my servers on a gigabit switch and, server by server, teamed the two network interfaces together.

Without this, the speed was 30-40 GB/hr.
With this solution, it is 120-180 GB/hr per server.
 