Best practice on offsite backups


chuckster43

Technical User
Jul 12, 2003
My client is installing Legato NetWorker 6.1.3 with a Sun L1000 tape library (30 slots). My question is: what is the best practice for running weekly full backups with daily incrementals while getting the full backup copied to a different tape for offsite storage? Is cloning the answer?

Thanks,
 
Yes, it could be an answer, but you'll have to check how long the cloning operation takes.

Cloning time is longer than backup time because NetWorker cannot multiplex during the cloning operation.

If the data being backed up allows it (DB exports, for example), it is faster to run a second backup to another destination pool; the other advantage is that you can give it a different retention policy.

Cloning, by default, runs immediately after the backup, so you cannot choose via the GUI when the operation starts (which can be a problem when you have many groups).
If you want to run it at a time of your choosing, you'll have to script it (not so hard!).
You'll also need two devices (though not necessarily of the same type).
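For what it's worth, here is a rough sketch of such a script. The group and pool names are made up, and the exact mminfo query syntax should be checked against the man page; it simply collects the save set IDs from the last run of one group and clones them to an offsite pool:

#!/bin/sh
# Sketch only: clone a group's recent save sets to an offsite pool.
# Weekly_Full and "Offsite Clone" are example names, not from this thread.
GROUP=Weekly_Full
CLONE_POOL="Offsite Clone"

# Collect the save set IDs this group has written since yesterday.
# grep -v drops the column header line, if mminfo prints one.
SSIDS=`mminfo -q "group=$GROUP,savetime>=yesterday" -r ssid | grep -v ssid`

if [ -n "$SSIDS" ]; then
    nsrclone -b "$CLONE_POOL" -S $SSIDS
fi

Run it from cron (or by hand) whenever you want the clone to start, instead of letting it fire right after the group.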

With a second backup you can decide via the GUI when to launch it, and you don't need another device.

Hope this helps.

 
Hi sdeb,

Thanks for the input. I had read that cloning was very slow, and the L1000 has only one device (although it is capable of holding another). So what you are saying is that it is better to immediately make another full backup to a different group/volume/pool, so that that set of tapes can be pulled for offsite storage? I hope I've got that right.

Thanks.
 
chuckster43;

The word "cloning" doesn't sit very well with backing up a mission critical data. So part of your decision on whether or not to clone should be based on how critical is the data being backed up.
Before juping into your pilot seat and taking off with cloning, it might be a good idea to consider the following:
What if you choose to clone and the original backup fail due to one thing or another? that means you will end up having two bad backup copies.
Whereas if you had two or more sets of backups running (in parallel or one after the other), you will have more chances of getting a good bakup.

Sleep on that; and let us know what you decided on doing.


^^^^^^^^^^^^^^^^^^^^^^^^^^
Experience is the Best Teacher
But its cost is Heavy!
^^^^^^^^^^^^^^^^^^^^^^^^^^
 
Sorry sdeb, but your statement is simply wrong.
Unfortunately, you are not alone.

NetWorker can in fact clone as fast as it has backed up,
as long as you can keep a tape drive streaming.

NetWorker will in fact most likely try to create a multiplexed clone medium. However, "tape drive contention" will usually not allow it to clone more than one save set at a time. Please have a look at the document tid_0303.pdf on Legato's PartnerNet page; it will help you understand the behaviour.

Also keep in mind that a second backup will not necessarily result in the same save set.
 
I'm nearly sure that in 99% of cases NetWorker will take longer to clone (read) than to back up (write).

I know it is possible to optimize it (for example by giving nsrclone save set lists from different media, or by not using multiplexing at backup time), but most of the time the save sets to be cloned reside on the same tape, so NetWorker cannot read two different save sets from that tape at once (and if it could, it would take a huge amount of time).

I know that a second backup will not necessarily result in the same save set; that's why I wrote "If your data allows it (database exports...)".

I think I have the field experience here, and telling a customer that cloning is as fast as backing up would be a wrong statement (I have spent long nights trying to optimize it...).

We both agree that in some situations it can be true, but not in most of them.

I don't want to say that it's a NetWorker or a drive problem... I don't care (it's true that most of the time it's a problem with tape buffers). I'm also using VERITAS NetBackup and it has the same problem.

I'll take a look at the PDF, but could you give me the URL?
I have PartnerNet access but don't know where to look (WHITE PAPERS?).

 
Well, you can test this easily. I agree, there are a lot of things that need to be taken into consideration; usually you do not optimize your system for cloning.

1. NW can of course read as fast as it has backed up.
Or in other words: if you can keep the drive streaming during a backup, why shouldn't it be possible to keep it streaming during a read (clone) process? You just need to ensure that you can shuffle the data away fast enough.

2. Consequently, you just need to make sure that you "need" all the data. For example, if you read a multiplexed tape, you will most likely not recover ALL multiplexed save sets at the SAME time. This already decreases the throughput. The same is true for cloning, of course.

3. You can definitely clone a multiplexed tape in a single pass, but you need to do it in a clever way. As I said, NW intends to support your effort, but again this depends on a lot of other things like drive contention.
The easiest way to achieve this is to start the clone process manually from the command line (a small sketch follows below):

nsrclone -S ssid_1 ssid_2 ... ssid_n

Also, please make sure that your tape is not positioned beyond the beginning of one of these save sets.
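One way to build that ssid list in practice (just a sketch; the volume and pool names are examples, and the exact mminfo syntax should be checked against the man page):

# Sketch: clone everything on one (multiplexed) volume in a single pass.
# WKLY001 and "Offsite Clone" are example names, not from this thread.
SSIDS=`mminfo -q "volume=WKLY001" -r ssid | grep -v ssid`
nsrclone -b "Offsite Clone" -S $SSIDS

Handing nsrclone the whole list in one call is what lets NW attempt the single pass described above, instead of being invoked once per save set.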

For more details, please look at the doc I mentioned. Unfortunately, since the "rebranding" to SalesNet, you can no longer browse Legato's PartnerNet page for these docs. But they are there.

Simply use the search tool (upper right corner) and look for "tid", then open issue 0303. BTW, another good resource is the "Technical Info Newsletters" (tin), but that info is briefer.

Let me know what you think about it.
 
605 & sdeb,

That's a lot of good info to parse through. I still think that for my client cloning would not be an option, since the L1000 only has one device and they want to keep project costs to a minimum. So I am going to run a separate backup of the weekly full, using different group and volume labeling, to cover the offsite storage the client wants. BTW, does anyone have a script that would kick off the second set of backups after the original is completed?

Thanks,
 
Hi chuckster43;

The same script that you use to kick off the first set of backups can also be used to take care of the second. All you need to do is:
1) create backup pools for local and offsite
2) make a copy of the original backup script and change the pool name accordingly (one for local, the other for offsite)
3) insert a "test condition" in whichever script you choose to run first so that it calls the second script upon completion (a wait period may also be inserted if you so desire); see the sketch below.
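Roughly like this, assuming your scripts wrap savegrp and that its exit status reflects how the group finished (the group names are examples only, not from your setup):

#!/bin/sh
# Sketch of step 3: run the local full first, then the offsite copy.
# Local_Full and Offsite_Full are example group names.
savegrp -l full Local_Full
if [ $? -eq 0 ]; then
    # sleep 600              # optional wait period before the second run
    savegrp -l full Offsite_Full
else
    echo "Local_Full failed; offsite run skipped" | mail -s "backup warning" root
fi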
If you don't have any backup script at all and would like me to email you a copy of mine, just provide your email.

Good luck.

^^^^^^^^^^^^^^^^^^^^^^^^^^
Experience is the Best Teacher
But its cost is Heavy!
^^^^^^^^^^^^^^^^^^^^^^^^^^
 
Hi bkonline,

Thanks, that would be a big help to me at this point. My email is as follows: chuck.marchman@ecommsecurity.com

Again, many thanks for the assistance from everyone.

Thanks,
 