Hello everyone, I'm trying to get a sense of how long most people are retaining DR backups. I know every shop is different, but I just wanted to see what's out there, because 60 days / 60 cycles seems extreme, especially if the tapes go offsite every day. Any answer is much appreciated.
Anyone out there have any best-practice ideas on whether backing up C:\Documents and Settings\*.dat or C:\Documents and Settings\*.log is necessary? Of course, taking good restore capability into account... Also, what about the SAV folders (%SystemDrive%:\Program Files\Symantec...
Thanks Birky, the primary target is tape. I guess it's just a matter of figuring out whether it's better to multiplex the SQL jobs; but if we separate them (file system and database) onto different tapes by adjusting schedules and media pools, we don't have concurrent writes, so they won't land on the same tapes anyway...
Is anyone aware of best practices where database (SQL) storage policies and multiplexing are concerned? I've seen it recommended to have different storage policies per data type, so once we have that, what are the recommended settings for databases and file systems?
We have an incremental storage policy set up ("master inc pol") and associated it with all SPs as their "Incremental Storage Policy". It looks like associating the aux copy operation (via a schedule policy) with the "master" incremental policy worked to get just the incrementals onto one tape and the fulls onto another. Thanks for...
Thanks bigg22, we have a synchronous aux copy set up. We've scheduled the aux copy via a schedule policy and associated the schedule with each SP's secondary copy. The problem is, each time the schedule runs, the fulls get aux copied onto the same tape as the incrementals. So, is there a way to have the incremental aux copies...
Question, hopefully someone can help: when there is a master aux copy scheduled and it's associated with just the primary copy of each SP, will the incrementals (each SP has a "master" incremental policy) be sent to tape too, or does the master aux schedule have to be associated with the "master"...
Thanks Bart, it was a little confusing terminology-wise, but that did the trick. We added a secondary copy to our storage policy, then set up a scheduled aux copy job that copies the "disk copy" to the library, and voila! No client processes running; the media agent writes everything to tape.
We need to first back up to a disk target (that part works fine), then move that data from the disk target onto a tape library automatically, without the client being affected. Does anyone know how to do that in CommVault 6.1 with two media agents sharing one disk target and one library?
First intro to CommVault, so excuse the question, but what's recommended for creating schedule policies? We have way too many, IMO (typical daily incrementals, weekend fulls strategy). Is it best to create them on a per-application basis (all SQL servers, project XYZ servers, etc.) or just create a...
No spyware, and we're hoping it's not a virus, but we'll be scanning the MBR today. All updates are done; nothing more from Windows Update. I'll also be checking the memory today as soon as the save sets complete and I can take the box down.
I'm out of ideas on what else to check. Can you recommend anything?
This is frustrating beyond belief.
Here's the setup:
Win2k SP4, no updates available
Legato NetWorker 7.1.1
The server is rebooting randomly about twice daily with the following error logged by Compaq Notifier: Blue Screen Trap (BugCheck, STOP: 0x000000B8 (0x00000000, 0x00000000, 0x00000000...
Thanks for the info, I found the log. What's the easiest way to find the exact completion report, or the amount of time that a particular backup ran? Sorry for the bother, but I was hoping there was a command I could use that would pull the amount of time a particular backup needed to complete...
Totally new to NetWorker. I've been asked to find out how long a certain group runs: one client in the group, one specific database in the group. I know the notification runs and tells us the current completion times, but how can I see how long it took to run last time, like a week ago? Is there any script I can run?
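Not from the thread itself, but a rough sketch of the kind of script that question is asking for, assuming NetWorker's mminfo command is available on the backup server. The client name "sqlserver01", the save set name "MSSQL:", the "sscreate"/"sscomp" report attributes, and the "-xc;" separated-output option are all assumptions to verify against the mminfo man page for your release; this is not a supported Legato tool, just an illustration in Python.

#!/usr/bin/env python
# Rough sketch: ask mminfo for a client's save sets from the past week and
# print how long each one took (completion time minus creation time).
# The -r attributes "sscreate"/"sscomp" and the "-xc;" separator option are
# assumptions -- confirm them in the mminfo man page before relying on this.
import subprocess
from datetime import datetime

CLIENT = "sqlserver01"   # hypothetical client name, not from the thread
SAVESET = "MSSQL:"       # hypothetical save set name
TIME_FMT = "%m/%d/%Y %I:%M:%S %p"   # adjust to match what mminfo prints

cmd = [
    "mminfo",
    "-q", "client=%s,name=%s" % (CLIENT, SAVESET),
    "-r", "client,name,level,sscreate,sscomp",
    "-xc;",              # semicolon-separated output, easier to split
    "-t", "last week",   # nsr_getdate-style lookback window
]

for row in subprocess.check_output(cmd).decode().splitlines():
    parts = [p.strip() for p in row.split(";")]
    if len(parts) != 5:
        continue                      # skip header or blank lines
    client, name, level, created, completed = parts
    try:
        start = datetime.strptime(created, TIME_FMT)
        end = datetime.strptime(completed, TIME_FMT)
    except ValueError:
        continue                      # skip rows whose timestamps don't parse
    print("%s  %s  (%s): ran %s" % (client, name, level, end - start))

If the timestamp layout mminfo prints on your server differs, adjust TIME_FMT to match; running the same mminfo query by hand first is the easiest way to see the exact output format before scripting around it.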