Might be nothing, but check the attributes of the files in question. See if they are read-only, or have some kind of restrictive permissions on them.
I had a similar issue when I was doing some archive testing, and Commvault wasn't archiving random files that it had been told to archive. We found...
Hi guys,
I'm being asked for the oldest copy we have of a particular folder, but it's not obvious how to find out which is the oldest copy still on tape.
Is there a way of choosing a folder and listing all copies of it that are still recoverable and haven't been aged?
Sorry...
Hi all,
I'm curious about your personal experiences and general opinion of NDMP with regard to its performance, reliability, etc.
I have an NDMP filer (NetApp) here, connected directly to my MA via fibre, and I experience all sorts of problems. The backup is about 12TB.
Whilst it essentially...
This is probably an easy one, but I can't see how to do this.
I had a problem over the weekend where 70 tapes were moved to the retired media pool. I need to bring these back into the default scratch pool, so I selected all the tapes and deleted all 70 in one hit, so as to rediscover them as...
Just to add, we also have Simpana 7.0 and are happily backing up about a dozen Windows Server 2008 boxes, both 32- and 64-bit. No known problems so far.
Hi guys,
I don't know about you guys, but fairly regularly I find tapes ending up in the retired media pool when I'm fairly confident there is nothing really wrong with them. The reason given is usually excessive read/write errors.
I've got two libraries, with about 150 LTO4 tapes in each and...
I got a rather nonsensical answer back from our consultants on this topic, so I decided to do some testing, and I think I've figured it out.
It appears that Commvault incrementals DO acknowledge *deleted* files as well as new and changed files.
I tested this at a simple level with a...
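To illustrate the idea (this is a toy model of the concept, not Commvault's actual implementation — all names here are made up), a synthetic full can honour deletions as long as each incremental records deleted paths alongside new and changed ones:

```python
# Toy model: a synthetic full built from the last full backup plus a
# chain of incrementals, where each incremental records "changed"
# (new/modified) files and "deleted" paths. Illustrative only.
def synthetic_full(last_full: dict, incrementals: list) -> dict:
    result = dict(last_full)
    for inc in incrementals:
        result.update(inc["changed"])   # apply new and modified files
        for path in inc["deleted"]:     # drop files removed since last job
            result.pop(path, None)
    return result

full = {"a.txt": 1, "b.txt": 1, "c.txt": 1}
inc1 = {"changed": {"d.txt": 2}, "deleted": ["b.txt"]}
synth = synthetic_full(full, [inc1])
print(sorted(synth))  # b.txt is gone, d.txt is present
```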
Looking at the commvault documentation, below:
http://documentation.commvault.com/commvault/release_7_0_0/books_online_1/english_us/features/backup/syn_full.htm
...the only bit which comes close to the issue of file discrepancies, is the 'verify synthetic full backup' option which states:
"In...
Another question on this: let's assume I go to a regime of synthetic fulls and incrementals, and do away with 'real' fulls completely.
Now let's assume that over the course of the next few months, large amounts of data are cleared out and deleted.
Normally a full backup would take account of...
Sorry for the delayed response guys, been on holidays :o)
@ Cabraun - Yes, we have considered synthetic fulls. However, the scan phase itself seems to be 50% of the problem. Even if we move to synthetic fulls, it will still have to scan 30 million files regardless of whether the job is an...
> Actually, you can hear that by the drive noise
Not possible to hear in our setup. We have 4 drives all physically installed next to each other in a large rack-height Qualstar tape library, which is itself in a noisy data centre.
We do have good support from Qualstar, so I will see what info...
Dear Craig,
That's some very interesting information. I've never heard of the 'shoe-shining' effect, but I will definitely investigate it further.
Although I've found a relevant article on the HP forums that states (not necessarily factually) that shoe-shining shouldn't affect...
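For anyone else reading up on this: shoe-shining happens when the data feed rate falls below the drive's minimum streaming speed, so the drive keeps stopping, rewinding and restarting. A back-of-envelope check (the LTO4 figures below are assumptions — check your drive's datasheet, as speed-matching ranges vary by model):

```python
# Rough shoe-shining risk check. The constants are assumed LTO4
# figures, not vendor-confirmed: ~120 MB/s native rate, with
# speed-matching assumed to go down to ~40 MB/s.
NATIVE_MB_S = 120      # assumed LTO4 native (uncompressed) rate
MIN_STREAM_MB_S = 40   # assumed lowest speed-matching step

def shoeshine_risk(feed_mb_s: float) -> str:
    if feed_mb_s >= NATIVE_MB_S:
        return "drive streaming at full speed"
    if feed_mb_s >= MIN_STREAM_MB_S:
        return "drive can speed-match: streaming, but below native rate"
    return "feed too slow: likely shoe-shining (stop/rewind/restart)"

# A feed of ~10 MB/s (roughly the 36GB/hour mentioned elsewhere in
# this thread) would be well under the assumed minimum:
print(shoeshine_risk(10))
```

This is also why a faster drive can make things worse: the gap between what the source can deliver and what the drive needs to keep streaming gets bigger.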
605, when you say 'fast drive', are you referring to the fact we are using LTO4? And why would using faster drives make the situation and backup time worse?
What do you mean by streaming mode? I've looked at Books Online and the knowledge base, but I can't find any reference to this term.
When...
Hi All,
Looking for your thoughts on this scenario.
At our site, we have a volume containing ~20,000 user home folders totalling ~30 million files. The volume is on a NetApp filer, which backs up via NDMP to a tape library with 4 LTO4 tape drives.
All other volumes on the NAS backup fine (and...
I wouldn't expect this to make any difference, because whatever the problem or bottleneck is, it isn't removed by splitting the backup into two jobs.
If the bottleneck were the number of files, and you split the backup into two jobs, ultimately you are still backing up the same 2.5 million...
I would also hazard a guess that the sheer number of files you are shifting here isn't helping.
33 hours for 1.2TB isn't brilliant, but when you take into account the 2.5 million files on your SAN, 36GB an hour doesn't sound overly bad either.
Other contributing factors to consider...
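For what it's worth, here's the quick arithmetic behind those figures (decimal units assumed, i.e. 1 TB = 1000 GB):

```python
# Sanity check of the numbers quoted above:
# 1.2 TB moved in 33 hours across ~2.5 million files.
data_gb = 1200.0
hours = 33
files = 2_500_000

gb_per_hour = data_gb / hours              # ~36 GB/h, as stated
mb_per_sec = gb_per_hour * 1000 / 3600     # ~10 MB/s reaching the drive
files_per_sec = files / (hours * 3600)     # ~21 files handled per second
avg_file_kb = data_gb * 1_000_000 / files  # ~480 KB average file size

print(f"{gb_per_hour:.0f} GB/h, {mb_per_sec:.1f} MB/s, "
      f"{files_per_sec:.0f} files/s, avg {avg_file_kb:.0f} KB/file")
```

With an average file under half a megabyte, per-file overhead (open, read attributes, index) can easily dominate the raw transfer time.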
I'm noticing a lot of warnings in my event viewer (error code 68:49) for
"Total clients selected to update in this attempt [1], skipped [1], will be attempted [0]"
If I look at my client list (which I haven't done for a while), I notice that only 5 clients are showing as 'up to date' and the...
Here's another question on this.
The documentation says that it combines the previous full and incremental backups into a single archive for efficiency and faster restores... nice.
But what happens if I need to recover a single file from that full backup? Does it have to recover the whole...