
Very Large FileServer Backups 2

Status
Not open for further replies.

smithrob (MIS, US) - May 17, 2007
Hello all...

I have several extremely large file servers that need backing up. A typical example:

c: 10G
d: 300G
e: 600G
f: 400G

So far, I am backing up each filesystem as its own save set, but I would like to break it down further to get more streams going. The problem is that the root-level folders are not evenly distributed.

D:\Program Files contains hardly anything, and several other folders at the root level are similar; there is no performance reason to give them separate save sets.

D:\Shared has a hundred or so folders under it, many of them very large. It would be nice to set up a few extra save sets for those, but if I do, is there a way I could cover all the other folders under D:\Shared without having to specify every one of them as its own save set?

I have experimented a bit and found that doing:
D:\
D:\Shared\Dept
D:\Shared\Finance

Will result in the D:\ save set backing up everything that the other two sets already cover.

Is there a way to do this without specifying every folder as its own set?

Thanks so much!
 
If you want separate save sets, you must not also specify a parent directory as a save set of its own.
 
That's a bummer... there are around 75 directories under D:\Shared and only maybe 6 or 7 are large enough to warrant their own save sets. If I do that, though, is there any way to get all the others without having to specify each of them in its own individual set?
 
Not really - best practice would be to create multiple client instances and use the new client configuration wizard (since NW 7.2) to define the save set lists by point-and-click.
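Since NetWorker will not split a directory tree automatically, one workaround is to generate the save set list with a small script and paste the result into the client resources. A minimal sketch in POSIX shell, under stated assumptions: the function name `list_big_savesets` is made up for illustration, and the share is assumed to be reachable at a Unix-style mount point (run it on the client itself or adapt the path for Windows):

```shell
# list_big_savesets SHARE THRESHOLD_KB
# Prints each immediate subdirectory of SHARE whose total size is at
# least THRESHOLD_KB, largest first. These are the candidates for their
# own save sets; everything else stays covered by the catch-all client.
list_big_savesets() {
    share=$1
    threshold_kb=$2
    du -sk "$share"/*/ 2>/dev/null | sort -rn |
    while read -r kb dir; do
        if [ "$kb" -ge "$threshold_kb" ]; then
            printf '%s\n' "$dir"
        fi
    done
}

# Example (commented out): subdirectories of /mnt/shared over 10 GB
# list_big_savesets /mnt/shared 10485760
```

Rerunning this periodically also catches newly created large directories that have outgrown the catch-all save set.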
 
Is there any particular drawback to having a Client with 100 or so save sets?
 
No - you will just have more entries in your media index.
 
What about this?
Is there a way to back up all of the files in a given directory while excluding its subdirectories?
 
Hi,

What you want to do, or at least what I usually do, is create different client definitions, each with a few save sets, e.g.

def_1
F:\dir1
F:\dir2

def_2
F:\dir3

def_3
F:\dir4\dir1
F:\dir5


I also create a def_4 client with save set F:\. This client should have a directive that skips (using the null ASM in this case) all the directories already covered by the other client definitions. In this example:

<< "F:\" >>
+null: dir1
+null: dir2
+null: dir3
+null: dir5
<< "F:\dir4" >>
+null: dir1

With this setup, when others add more directories they will be picked up by the last client with F:\ as its save set - you don't want someone to add a directory that then never gets backed up. Depending on your requirements, it may also be a good option to set up different schedules for the different client definitions to spread the full backups over several days.

For example, def_1 runs a full on Monday and incrementals the rest of the week, def_2 a full on Tuesday and incrementals the rest of the week, and so on. This works just fine, but it can be a bit confusing if you are going to do save set recovers; if you're familiar with mminfo, though, it should be a piece of cake. If you always use the GUI and browse when restoring, you'll be just fine and don't have to worry about the different levels being run on different days.
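For save set recovers across mixed levels, an mminfo query narrows things down quickly. A sketch of the idea (the client name `bigfs01` is a placeholder, and the query/report attributes shown are the common ones; check your mminfo man page for the full list):

```shell
# List the save sets for one client and path, newest first,
# showing which tape, save set ID, time, and backup level
mminfo -avot -q "client=bigfs01,name=F:\dir3" \
       -r "volume,ssid,savetime,level,totalsize"

# Then recover directly by save set ID
recover -S <ssid>
```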
 
Besides specifying more save sets, you can try increasing the parallelism on the server and the tape devices.

I was doing some tests and was able to get a backup down from 30 hours to 10 hours by means of parallelism; that is the reason you should put in more save sets.

Good luck!
 
Thanks so much!
Any advice on increasing the parallelism of the tape drives? How much is too much? The built-in default is 4 and the max is 512; that's a lot of room in between. Has anyone found a comfortable middle ground?
 
That really depends on your environment. Basically, when backing up to tape, the higher the target sessions/parallelism on your tape drives, the better the backup performance and the worse the restore performance, because multiplexed save sets have to be demultiplexed on read.

Today's very fast tape drives probably need more than 4 concurrent backups to keep streaming, but that of course depends on a variety of factors. You should definitely be a bit careful and not use too high a parallelism/target sessions value on your devices; I don't usually set it higher than 8. Usually you set the server parallelism to the sum of all the target sessions. Remember, these are only guidelines and not the best solution for every backup environment.
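Both knobs can be changed from the GUI or with nsradmin. A sketch of an interactive nsradmin session (the device name and the counts are examples for a hypothetical 4-drive setup, not recommendations; `target sessions` and `parallelism` are the standard attribute names on the NSR device and NSR server resources):

```shell
# Select a tape device and raise its target sessions
nsradmin> . type: NSR device; name: rd=storagenode:/dev/nst0
nsradmin> update target sessions: 8

# Server parallelism roughly = sum of all target sessions (e.g. 4 drives x 8)
nsradmin> . type: NSR server
nsradmin> update parallelism: 32
```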
 
Be a bit careful - as Rif123 states, it all depends on the environment and what NW has to do at a time. If one client with one data stream can keep the drive streaming, that is it - you cannot go faster. If you have slow clients, then it is probably better to back up multiple streams at the same time just to keep your tape drive fed.

A totally different approach is also valid: use automatic staging via an Advanced File Type Device. Because a disk reacts faster, it does not need to be kept 'streaming', so you may want to collect your data on disk first and then stage it to tape in one very fast (local) pass.

Unfortunately, you need a license for the disk device.
 
Is there a good doc out there on setting up a disk->disk->tape solution with NetWorker?
 
There is really nothing special about it - besides the Disk Backup Option (DBO) license, you simply need to set up a staging resource, and those few parameters are obvious.

BTW - 512 is the maximum parallelism for a NW server. However, you can only reach it if you have also added the necessary number of storage node licenses (Network & Power Edition only).
 