Tape Library Advice


rrkano (IS-IT--Management), Jul 26, 2004
We're finally going to replace an old Compaq Tape Library that we have, so I'm looking for some advice on what to buy. Our vendor is kind of pushing the Overland LoaderXpress, 1 LTO-2 drive (LVD), 11 slots....and I have a friend who swears by Exabyte's VXA-320 PacketLoader.

Any recommendations? Is Firewire better than SCSI?

By the way, we use BrightStor and we have about 7 or 8 servers to back up. Currently, incrementals take about 8 hours and full backups take 2-3 days, so speed is essential. Thanks for any help.
 
In terms of ARCserve, I have never had many problems with Overland; I cannot say the same for Exabyte (in terms of hardware reliability, firmware, and software compatibility).

I would stick with SCSI as support for connections other than SCSI is still fairly limited.
 
Overland, ADIC/HP/Dell, Quantum, and IBM are all good choices.

SCSI, not FireWire.
 
rrkano,

I purchased an Exabyte VXA-320 PacketLoader about four months ago and simply love it. I use ARCserve v9.1 and have had no issues with the drive or compatibility.

-Jeff
 
I have used two similar Overland libraries and they have been highly reliable and well supported. If you need to increase your backup speed dramatically, you should look at LTO-3. The drives are not much more expensive and the data rate onto tape is quite a bit higher, which will cut your backup time unless there is a bottleneck elsewhere.

Before buying anything, check the CA compatibility list for the library, drives and firmware.
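
To put rough numbers on the LTO-2 vs LTO-3 point above, here is a back-of-envelope sketch in Python. The ~40 MB/s and ~80 MB/s native rates are approximate published figures and the 200 GB example volume is just a placeholder; real throughput also depends on compression and on whether you can keep the drive streaming.

# Rough time-to-tape at native drive speed, assuming the drive is kept streaming.
def hours_to_tape(data_gb, native_mb_per_s):
    return (data_gb * 1024) / native_mb_per_s / 3600

data_gb = 200  # example volume; plug in your own
for name, rate in [("LTO-2 (~40 MB/s native)", 40), ("LTO-3 (~80 MB/s native)", 80)]:
    print(f"{name}: {data_gb} GB in ~{hours_to_tape(data_gb, rate):.1f} h")

In other words, the drive itself is rarely the limiting factor at these volumes; the rest of the chain has to be able to feed it.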
 
Thanks for all the responses. I appreciate it! BackupFanatic touched on a follow-up question that I had. Currently we use DLT, so the question is whether I should continue with DLT or move to LTO. I guess it just means investing a little more money in LTO tapes, but is the difference that significant and worth it? Thanks again!
 
It really depends who you speak to. Although SDLT has now sorted out a great deal of the buggy firmware issues, it still missed the boat for me. LTO is a better technology, but it's not without its quirks and compatibility issues either, especially in my experience with the faster IBM LTO drives, which seem to be particularly choosy about tape brands. As for going for LTO-3, you might want to read this doc first before splashing out on something you don't need and can't stream fast enough to get the benefit from:


Although it is ADIC-biased, it does give some food for thought.
 
There is yet another consideration in choosing your tape technology: backward compatibility and data retention. Depending on your retention policy and your current tape technology, you might want to stay with some sort of DLT. I was caught out by the sales pitch that SDLT drives would read older DLT tapes if I needed to restore older data.

It's true in principle, but the tape leaders are different, and the second time I tried it the SDLT drive broke its leader pickup. After that happened several times, I decided to keep the old DLT drives attached to a second SCSI channel and powered off until I needed to restore old data. After an appropriate time you can then retire the old drive(s) completely.

If you have newer drives (say, first-generation SDLT) then going to SDLT 320 or 600 makes the most sense, along with retiring the old drives, as you can easily read the old tapes.

If you have really old drives, I would recommend keeping them and going LTO-x. You could also make a project of transferring older tapes to the new technology, but that can be very expensive and time-consuming.

I have just moved to a new job and am facing the upgrade of a StorageTek L80 library with two SDLT 320 drives. The business is pushing for two LTO drives to increase speed and capacity, but they have not thought about compatibility. Since the library can take up to eight drives, I like the option of upgrading to either more SDLT 320 drives or fewer SDLT 600 drives. That removes the need to migrate old tapes or keep old technology, and it will probably be about the same speed over a large job.

That's a very interesting point about LTO-3 drives being overkill. Do you think it holds true if the server uses BAB 11.5 with staging and multiple SCSI channels?
 
Wow... you've all given me plenty to think about. Right now we probably back up 150-200 GB of data (most of which is our Exchange server), but I also want to plan my tape library purchase to accommodate an increase in data to back up. So I'll really have to take a close look at this.

I don't really have a need to read the old tapes, but like you say, I'll keep the old tape library and tapes around just in case and retire it slowly.

The server that ARCserve runs on is not incredibly fast by today's standards; it probably has a 2.0 GHz processor and a SCSI RAID disk configuration.
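
As a quick sanity check on those figures (a rough calculation only, using the 150-200 GB and 2-3 day numbers quoted above):

# Implied sustained throughput of the current full backup.
data_gb = 200
for days in (2, 3):
    mb_per_s = (data_gb * 1024) / (days * 24 * 3600)
    print(f"{days} days for {data_gb} GB -> ~{mb_per_s:.1f} MB/s sustained")
# Roughly 0.8-1.2 MB/s, far below what any current tape drive can accept,
# which suggests the servers/network, not the drive, are the bottleneck today.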

In any case, I have a lot of info to digest. Thanks again for all your help!

Ray
 
@rrkano - With Exchange, if you do a brick-level or even document-level backup then you're unlikely to be able to stream enough data to avoid what is described in the document, but if you multiplex this or use disk staging then you should be OK.

@BackupFanatic - Certainly there were a lot of issues with SDLT drive firmware early on, and IIRC there were a few issues with broken leaders too, which I believe they modified the firmware to at least partly address - or maybe it was an HP/Quantum hardware bulletin I saw it in; I can't remember off the top of my head.

The problem in more recent times is that Quantum did a typically HP thing and re-used firmware revision numbers that had already been used by an OEM, or didn't follow a hex increment that would let anyone with a modicum of common sense tell that one firmware is more recent than another. That just causes more confusion and is bloody annoying!

Anyway, I'm getting way off topic here. With regard to your question as to whether disk staging would run into the problem of not being able to stream an LTO-3 drive sufficiently: it's unlikely that this would be an issue, provided of course that the SCSI or Fibre Channel bus supplying the drive can support a sustained transfer rate (not just burst) equal to or greater than the drive's minimum streaming threshold.
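
As a sketch of that go/no-go check (the ~27 MB/s minimum native streaming rate used here is an assumption for a speed-matching LTO-3 drive; take the real figure from your drive's spec sheet):

# Will the drive stream, or shoe-shine (stop/start)?
def will_stream(sustained_source_mb_s, min_stream_mb_s=27.0):
    # min_stream_mb_s is drive-dependent; ~25-40 MB/s native is typical for
    # speed-matching LTO-3 drives (assumed value, check the spec sheet).
    return sustained_source_mb_s >= min_stream_mb_s

print(will_stream(60))  # True: a staging volume delivering 60 MB/s sustained keeps the drive streaming
print(will_stream(5))   # False: a trickle (e.g. a doc-level Exchange backup) will shoe-shine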

When disk staging triggers what is, to all intents and purposes, the same concept as a customised tapecopy job, you are doing a direct block-by-block copy from disk to tape, so you don't run into the increased filesystem I/O overhead of reading many small files. So all things being equal, and assuming you have a dedicated backup server which isn't a production app, email, or other shared/heavily used server, it should be fine.

The one thing to watch out for is filesystem fragmentation. Even though what you're doing when you write to disk is writing a huge file which represents the session, you are still at the mercy of the OS filesystem write daemon.

What I mean by this is that when ARCserve issues a system call to the OS to write a block of data to disk, ARCserve itself has no way of telling the OS that it wants to write the data to contiguous blocks; that really is down to the OS and is not under the control of any application, even though it would obviously make sense to do so.
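
Purely to illustrate what contiguous pre-allocation would look like if an application could do it (this is not something ARCserve exposes, even then NTFS gives no hard guarantee of contiguity, and the path below is hypothetical):

def preallocate(path, size_bytes):
    # Extend the file to its final size in one operation before any data is
    # written, which at least gives the filesystem a chance to find one
    # contiguous run instead of growing the file piecemeal.
    with open(path, "wb") as f:
        f.truncate(size_bytes)

preallocate(r"D:\staging\session0001.tmp", 50 * 1024**3)  # e.g. a 50 GB staging session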

I did quite a bit of research into this some time ago, but didn't come up with anything concrete, mainly due to lack of time and inclination to work any further on it for a number of reasons I won't go into here.

What I will say is that I did look into the possibility that fragmentation could be caused in high-load situations (read this as high CPU, high bus/PCI load, or maximised disk bandwidth) whereby the system calls on what used to be known as the LazyWriteDaemon in NT.

Historically, what used to happen in these situations in NT4 was that when the system was under high stress/load and writes or operations were stacking up, or being cached or delayed, the OS would get a bit 'slap-happy', for want of a better term, and would simply write the pending data at the very first free space it came to, regardless of whether that made good filesystem fragmentation sense or not.

Although I did a lot of research into this, trying to find any concrete info on whether the same thing happens in 2000 or 2003 was difficult. What I do know is that it's not a problem unique to ARCserve. In fact, a while back there was a huge thread on Veritas' forums about this, which their forum mods seemed to make great efforts to avoid replying to with any really useful information.

Getting back to the fragmentation issue: if it does become a problem, the 'quick' fix is to drop and recreate the partition (assuming you are using a dedicated partition for disk staging) - defragging doesn't always fix the issue and can take an age to complete, if it ever does. While it might not show up as a problem in tapecopy or disk-staging speed from disk to tape, it may show as a slowdown backing up to disk for no apparent reason - say, for example, a backup of the same unchanged dataset takes 1 hour one week and 1.5 hours the next (and it has nothing to do with the ASDB).
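
A crude way to spot that sort of creeping slowdown without waiting for the next backup is to time a large sequential write to the staging volume now and again (a sketch only; the path and the 2 GB test size are arbitrary):

import os, time

def staging_write_speed(path, size_mb=2048, chunk_mb=8):
    # Write size_mb of zeroes sequentially and return the achieved MB/s.
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

test_file = r"D:\staging\write_test.tmp"  # hypothetical path on the staging volume
print(f"~{staging_write_speed(test_file):.0f} MB/s sequential write")

If that figure drifts down week on week while nothing else has changed, fragmentation is a reasonable suspect.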

In a small or medium-sized org you're unlikely to run into the waffle I just went through above, but in a larger organisation where you're pushing the limits of the hardware you may well come across this. There are lots of variables other than just CPU and disk bandwidth, but unless you're really hitting this problem I won't waste any more internet bandwidth explaining it :)
 
I have a dumb question: what exactly is disk staging? Does that mean writing the backup to disk first, then running a tape backup of it? Is there another software product involved, or does ARCserve do this?

Thanks
 
Never mind, I just googled disk staging and found some info about it on CA's site. I just need to find out if it's available in ARCserve 9.x.

Thanks
 
It does, but you have to use the tapecopy utility for the part where disk data is copied to tape media, which means command-line scripting.
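
If you do end up scripting it on 9.x, a minimal wrapper might look something like the sketch below. The tapecopy switches themselves are deliberately left as a placeholder (take them from the ARCserve 9 documentation), and the install path is an assumption:

import subprocess, sys

TAPECOPY_EXE = r"C:\Program Files\CA\BrightStor ARCserve Backup\tapecopy.exe"  # assumed install path
TAPECOPY_ARGS = []  # placeholder: fill in the real switches (source/destination groups, etc.) from the ARCserve 9 docs

# Run the disk-to-tape copy step and surface its output/exit code so a
# scheduled task can flag failures.
result = subprocess.run([TAPECOPY_EXE] + TAPECOPY_ARGS, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)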
 
@VSchumpy: most interesting (again)! I pointed out to the previous backup guy that the staging devices were incredibly fragmented, and apparently they never bothered to defrag any of their 180 servers. The most fragmented device had over 3,000 fragments, and I will be manually defragging it ASAP, plus installing an automated tool like Diskeeper.

I have only been at this new job one day and already the to-do list is like a phone book!

@RRKANO: Staging is relatively new; I would try incorporating an upgrade to v11.5 into your library purchase. I can hardly recall all the features added since v9, but they are generally worth the trouble of upgrading.
 
@rrkano - Disk staging can be policy-based in 11.5; for prior versions it is manual, so to speak, through a kludgy command-line interface. In the previous versions it can be automated to a certain extent; see this doc for some guidance:


Make sure you are patched to the latest level on 9 so that you have the latest published, bug-fixed version of tapecopy.

@BackupFanatic - I am a fan of defragging on a fairly regular basis, as part of weekly housekeeping, just to keep things running optimally. I much prefer O&O Defrag over Diskeeper, and whatever I use, I avoid any automated background defragging - I really don't want it eating into my disk channel or CPU bandwidth unless I tell it to :)
 