Might be nothing, but check the attributes of the files in question. See if they are read-only, or have some kind of restrictive permissions on them.
I had a similar issue when I was doing some archive testing, and Commvault was skipping random files that it had been told to archive. We found...
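In the meantime, here's a rough Python sketch of the kind of attribute check I mean. The folder path is just a placeholder, and the write-bit test is only an approximation of "read-only":

import os
import stat

# Walk a folder and flag files that are read-only or unreadable.
# 'D:/archive_test' is a placeholder path, purely for illustration.
for root, _dirs, files in os.walk("D:/archive_test"):
    for name in files:
        path = os.path.join(root, name)
        if not os.stat(path).st_mode & stat.S_IWRITE:
            print("read-only:", path)
        if not os.access(path, os.R_OK):
            print("not readable by this user:", path)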
Hi guys,
I'm being asked for the oldest copy we have of a particular folder, but it's not immediately obvious how to find the oldest copy that is still on tape.
Is there a way of selecting a folder and listing all copies of it that are still recoverable and haven't been aged?
Sorry...
...NDMP sessions every couple of days pretty much.
However, in comparison, backing up our servers is almost faultless. We back up about 100 Windows and *nix boxes, and these just work.
In general, my experience of managing NDMP backups and their associated issues has not been great.
Am I alone, or are...
This is probably an easy one, but I can't see how to do this.
I had a problem over the weekend where 70 tapes were moved to the retired media pool. I need to bring these back into the default scratch pool, so I selected all 70 tapes and deleted them in one hit, so as to rediscover them as...
Just to add, we also have Simpana 7.0 and are happily backing up about a dozen Windows Server 2008 boxes, both 32- and 64-bit. No known problems so far.
...scratch pool (which is what I'm doing) isn't ideal, but I'm not going to throw away what I'm fairly sure are perfectly working tapes.
How can you *really* tell when a tape is deprecated?
No one wants to take chances with their data, so how do others deal with this? I assume other people see...
...on this topic, so I decided to do some testing, and I think I've figured it out.
It appears that Commvault incrementals DO track *deleted* files, as well as new and changed files.
I tested this at a simple level with a folder containing two files, and did the following (see the sketch after this list):
* Ran a...
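As a rough illustration of the same idea outside Commvault (to be clear, this is just a toy diff of two directory scans, not how Commvault actually detects deletions), here's a Python sketch:

import os

def snapshot(folder):
    # Return the set of relative file paths under a folder.
    paths = set()
    for root, _dirs, files in os.walk(folder):
        for name in files:
            paths.add(os.path.relpath(os.path.join(root, name), folder))
    return paths

before = snapshot("test_folder")   # e.g. {'file1.txt', 'file2.txt'}
# ...delete a file between the two scans...
after = snapshot("test_folder")    # e.g. {'file1.txt'}

print("deleted since last scan:", before - after)
print("new since last scan:", after - before)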
Looking at the Commvault documentation, below:
http://documentation.commvault.com/commvault/release_7_0_0/books_online_1/english_us/features/backup/syn_full.htm
...the only bit that comes close to the issue of file discrepancies is the 'verify synthetic full backup' option, which states:
"In...
Another question on this: let's assume I move to a regime of synthetic fulls and incrementals, and do away with 'real' fulls completely.
Now let's assume that over the course of the next few months, large amounts of data are cleared out and deleted.
Normally a full backup would take account of...
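For what it's worth, given my earlier test showing that incrementals do record deletions, my mental model of how a synthetic full could honour them is the toy sketch below. This is purely my assumption of the logic, not Commvault's actual implementation:

# Toy model: a 'backup' is a dict of {path: content}; an incremental
# records adds/changes plus a set of deletions. A synthetic full is
# built by replaying incrementals on top of the previous full, so
# deleted files should drop out without a 'real' full ever running.
def synthesize_full(previous_full, incrementals):
    result = dict(previous_full)
    for inc in incrementals:
        result.update(inc["changed"])   # new and modified files
        for path in inc["deleted"]:     # files removed since the last job
            result.pop(path, None)
    return result

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [
    {"changed": {"a.txt": "v2"}, "deleted": set()},
    {"changed": {}, "deleted": {"b.txt"}},   # b.txt cleared out
]
print(synthesize_full(full, incs))   # {'a.txt': 'v2'} and b.txt is gone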
...16 million images. I wonder if the difference here is something to do with it being NDMP in our case, rather than a regular server backup?
@Psy053
* No, I haven't run any performance testing. What would you advise in this scenario with a Netapp Filer?
* No AV software involved. This scenario...
> Actually, you can hear that by the drive noise
Not possible to hear in our setup. We have four drives installed physically next to each other in a large, rack-height Qualstar tape library, which is itself in a noisy data centre.
We do have good support from Qualstar, so I will see what info...
...of all the details on this side of things, other than that it was causing performance problems that were apparently not resolvable except by moving it *off* Unix.
Also note that the OS on our Netapp Filer is Netapp's own 'Ontap' OS, not Windows ;o), so Windows isn't involved in this particular...
605, when you say 'fast drive', are you referring to the fact that we are using LTO4? And why would using faster drives make the situation, and the backup time, worse?
What do you mean by streaming mode? I've looked at Books Online and the knowledge base, but I can't find any reference to this term.
When...
...longer since we split it into separate subclients per folder.
We are looking at implementing synthetic fulls; however, whilst this will solve the *full* backups, it seems likely the incrementals are still going to run into a two-day window because of the sheer number of files they still have to...
I wouldn't expect this to make any difference, because wherever the problem or bottleneck is, it isn't removed by splitting the backup into two jobs.
If the bottleneck is the number of files and you split that backup into two jobs, ultimately you are still backing up the same 2.5 million...
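As a back-of-the-envelope illustration in Python, assume job time is roughly per-file overhead times file count, plus data volume over drive throughput. Both numbers below are made up, picked only to land near the 33-hour figure mentioned in this thread:

# Splitting one job into two changes nothing when the jobs run back
# to back: the file counts just add back up to the same total.
PER_FILE_SECS = 0.04      # made-up per-file scan/index overhead
THROUGHPUT_GB_H = 200     # made-up raw drive throughput

def job_hours(file_count, gigabytes):
    return (file_count * PER_FILE_SECS) / 3600 + gigabytes / THROUGHPUT_GB_H

one_job = job_hours(2_500_000, 1200)
two_jobs = job_hours(1_250_000, 600) + job_hours(1_250_000, 600)
print(f"one job: {one_job:.1f}h, two sequential jobs: {two_jobs:.1f}h")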
I would also hazard a guess that the sheer number of files you are shifting here is not helping.
33 hours for 1.2TB isn't brilliant, but when you take into account the 2.5 million files you have on your SAN, 36GB an hour doesn't sound overly bad either.
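Quick sanity check on that rate (taking 1TB as 1000GB):

tb, hours = 1.2, 33
print(f"{tb * 1000 / hours:.0f} GB/h")   # ~36 GB/h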
Other contributing factors to consider...
I'm noticing a lot of warnings in my Event Viewer (error code 68:49) for
"Total clients selected to update in this attempt [1], skipped [1], will be attempted [0]"
If I look at my client list (which I haven't done for a while), I notice that only 5 clients are showing as 'up to date' and the...
Here's another question on this.
The documentation says that it combines the previous full and incremental backups into a single archive for efficiency and faster restores...nice.
But what happens if I need to recover a single file from that full backup? Does it have to recover the whole...