
Relatively slow Magnetic Library backups


fabwhack (IS-IT--Management)
Jan 20, 2008

We have our iDataAgents backing up to a Magnetic Library on a Win 2003 server (Galaxy 6.1 SP4). One of the iDAs is a Windows 2003 server with several terabytes of data to be backed up.

From this, or any other iDA, we never get more than 35GB/hour throughput. This isn't an issue for the other clients, as the volume of data to be backed up is relatively small, but for the big server we need rather more throughput than that.

There are gigabit links between the MA and the iDAs, and a simple Windows file copy between the MA and the "big" iDA - even while a CommVault backup is running - rattles along at 200GB+ an hour, which is much more in line with what I'd expect.

I can't see anything obviously wrong in the logs. Adjusting the number of readers doesn't change anything, and there's no compression on the policy (the data to be backed up is already compressed).

Should I be content with 35GB/hour, or does it sound like there's something wrong here? Is there any troubleshooting I can do to figure out why iDA->MA performance is relatively poor?
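
As a rough sanity check (my own back-of-envelope arithmetic, assuming decimal units and ignoring protocol overhead), here's how those rates compare to the link speed:

    # Rough conversion of backup throughput to link utilisation.
    # Assumes decimal units (1 GB = 1000 MB = 8000 Mb) and ignores
    # protocol overhead, so these are ballpark figures only.

    LINK_MBPS = 1000  # gigabit link between the MA and the iDAs

    def gb_per_hour_to_mbps(gb_per_hour):
        """Convert a GB/hour transfer rate to megabits per second."""
        return gb_per_hour * 8000 / 3600

    for rate in (35, 200):  # CommVault backup rate vs. plain file copy
        mbps = gb_per_hour_to_mbps(rate)
        print(f"{rate} GB/hr ~ {mbps:.0f} Mb/s "
              f"({100 * mbps / LINK_MBPS:.0f}% of a gigabit link)")

    # 35 GB/hr  ~  78 Mb/s (8% of a gigabit link)
    # 200 GB/hr ~ 444 Mb/s (44% of a gigabit link)

So even a plain file copy only reaches about half of line rate, while the backup sits under 10% of it.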
 
Have you tried playing with the streams? Breaking the content up into different subclients?

I run my Exchange DBs (900GB) in 4 concurrent streams and it finishes in about 2.5 hours; I'm getting about 45-55GB/hour out of each stream.

Is your Gb link dedicated to backups?
Any compression or encryption going on?

35GB/hour is a bit slow.
My Citrix NDMP to tape (LTO2) sometimes spikes to 70GB/hour.

There is something else going on...

 
35GB/hour is a typical 100Mb Ethernet throughput figure, especially as you mention you never get higher performance than this. Could it be that there are multiple NICs involved here? Normally CommVault takes the first NIC in the binding order unless you specify otherwise (Data Interface Pairs).
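
Rough numbers, assuming decimal units and no protocol overhead:

    # Theoretical throughput ceiling of a network link, in GB/hour.
    # Decimal units assumed (1 GB = 8000 Mb); real-world backup
    # throughput will land somewhat below these figures.

    def max_gb_per_hour(link_mbps):
        return link_mbps / 8000 * 3600

    print(f"100 Mb Ethernet: ~{max_gb_per_hour(100):.0f} GB/hour")   # ~45 GB/hour
    print(f"1 Gb Ethernet:   ~{max_gb_per_hour(1000):.0f} GB/hour")  # ~450 GB/hour

With SMB and backup overhead, a saturated 100Mb path ends up right around the 35GB/hour you are seeing, which is why it's worth checking which NIC the backup traffic is actually using.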

regards
 
Fragmentation has a pretty massive effect on this; if you have a window during the week to run defrags, that should help.

Some other factors are what type of storage it is, how many streams are running at once, and whether anything else is accessing the underlying disks.

Changing the allocation unit on the volumes to 64K will also improve things (note you can only set the allocation unit when the volume is formatted).
 
Thanks for all the replies. They've helped me focus on the cause of the problem, and to a large degree I think the problem is our lack of multi-stream backups. When I did the file copy test, I wasn't comparing apples with apples - the file copy was of a single large file, whereas the real-world backup is of millions of smaller files, which are of course much slower.

I'd never really explored the whole multi-stream idea, mostly due to my lack of knowledge about CommVault. However, as part of this troubleshooting exercise, I've gone through the system and configured everything I think I need to get more than one stream per backup. Here's the checklist:

* We have an "Advanced File System iDA Options" license installed

* The default subclient for the 'big server' has "Number of Data Readers" set to 3, and "Allow multiple data readers within a drive or mount point" is checked.

* The storage policy has "Max No. of Streams" set to 4

However, if I start a backup and look at the job details, there's only one stream running. At the bottom of the detail window, it tells me "Maximum number of Readers: 0" and "Number of Readers in use: 0". Despite the zeros, the job is rumbling along at ~35GB/hr.

Any idea what I need to do to get a couple more streams running?


 
Do you have multiplexing enabled on the SP?

You might want to break out your content into multiple subclients in order to get more concurrent throughput.

If you break out directories A-N into one subclient and then add a subclient for directories O-Z, you might get two jobs running at 35GB/hour each to your MagLib.

Just a thought.
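
As a very rough illustration of what splitting the content buys you (this assumes each stream keeps sustaining ~35GB/hour and that nothing else - MagLib disks, network, MA - becomes the bottleneck, so treat it as a best case; the 2TB figure is just an example for the big iDA):

    # Best-case estimate of backup time when content is split across
    # several subclients running concurrently. Assumes linear scaling:
    # each stream sustains the same ~35 GB/hour and nothing else
    # (MagLib disks, network, MA) becomes the bottleneck.

    PER_STREAM_GB_PER_HOUR = 35
    DATA_GB = 2000  # illustrative figure for a couple of TB on the big iDA

    for streams in (1, 2, 4):
        aggregate = streams * PER_STREAM_GB_PER_HOUR
        hours = DATA_GB / aggregate
        print(f"{streams} stream(s): ~{aggregate} GB/hour aggregate, "
              f"~{hours:.0f} hours for {DATA_GB} GB")

    # 1 stream(s): ~35 GB/hour aggregate,  ~57 hours for 2000 GB
    # 2 stream(s): ~70 GB/hour aggregate,  ~29 hours for 2000 GB
    # 4 stream(s): ~140 GB/hour aggregate, ~14 hours for 2000 GB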

Thanks,

Frank
 
[blush] Don't know what I was thinking - changing the SP stream settings wouldn't affect jobs that were already running. Starting new jobs gives me a lovely multi-stream backup :)

Thank you all for your help; I'm going to split the server into subclients, which is a great idea on many levels. Very happy to have found this forum - it's perfect for this kind of question: not a technical fault as such, more of a setup query, which our reseller/support partner tends to view as consultancy [neutral]
 