
Backup Solution 2


robertedmed (Technical User) - May 23, 2005
We have outgrown our tape backup system and have been looking at the Iomega
REV 32GB systems. Anyone have experience with Iomega?

THANKS robert
 
robertedmed,

This Iomega drive uses expensive proprietary media that will hold 90GB compressed. Typically, a 4-pack is about $200 USD and the drive about $300-400 USD.

They are reliable and support is fair.

What backup scheme are you using? Daily incrementals with a weekly full, or something else? Since you have been using tape, which is relatively slow, I would expect that you have been running some type of incremental.

As an option, have you considered using hard drives? At current prices this is becoming a completely viable alternative.

We have converted, and we do a full backup daily and store it in another area away from the servers. This is good practice: if there is a catastrophe in the server area and the backups are there too, you are wiped out.

We use drive enclosures installed in a PC and remove the drives via caddy after the backup is complete. Using hard drives over Gb Ethernet, the backup is fast, and since the drives are only in service for a short period daily, they should last well past any expected lifetime. Just something that you might consider.
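
A minimal sketch of that kind of nightly job, in Python for illustration only. The paths, drive letter, and retention count here are assumptions, not anything from the posts above; the point is just a dated full copy to whatever removable disk is in the caddy, with old sets pruned:

```python
# Sketch of a nightly full backup to a removable hard drive.
# All paths are hypothetical; adjust for your own setup.
import datetime
import os
import shutil
import sys

SOURCE = r"D:\data"        # assumed data tree to back up
DEST_ROOT = r"G:\backups"  # assumed mount point of the caddy drive
KEEP = 5                   # dated full sets to retain on the disk

def nightly_full_backup():
    if not os.path.isdir("G:\\"):
        sys.exit("Backup drive not present - is the caddy inserted?")
    dest = os.path.join(DEST_ROOT, "full-" + datetime.date.today().isoformat())
    shutil.copytree(SOURCE, dest)  # plain full copy, no incrementals

    # Prune the oldest sets beyond the retention window; ISO dates in
    # the directory names mean a lexical sort is also chronological.
    sets = sorted(d for d in os.listdir(DEST_ROOT) if d.startswith("full-"))
    for old in sets[:-KEEP]:
        shutil.rmtree(os.path.join(DEST_ROOT, old))

if __name__ == "__main__":
    nightly_full_backup()
```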

rvnguy
"I know everything..I just can't remember it all
 
Or else get a RAID system, and then you only have to back up occasionally. With three drives, any one can fail and the data is still available.

David
 
Iomega has always worked pretty well, and almost always been substantially more expensive than the alternatives. I now have three internal hard drives, with one used to hold a Ghost image of C:. It is large enough that it also hosts other activities. I back up my data to an external hard drive.
 
With a RAID system you still have to back up regularly. If you have a RAID array and your building burns down, it doesn't matter how many disks were in the array.

RAID is for fault tolerance. Backups are for disaster recovery. Anyone who uses one solution for both is looking for big problems down the road.
 
A previous admin had purchased and set up a 32GB REV drive for backup on our Win2000 server. The drive has since failed and is no longer under warranty.
Cost and data growth have prompted us to purchase two 500GB USB hard drives for backup. We've been trying to use the existing Iomega software for backups to the USB drives. When connected to the server, each USB drive is configured for the same local drive letter (G:). The Iomega software is configured to back up each night to G:, and we rotate the two drives nightly. Seemed like a good enough plan at the time...

The backup job fails to launch whenever the 2nd drive is connected. Although it is assigned the same drive letter (G:), Iomega sees it as new media and waits for human interaction. Has anyone else seen this? Is there a way around this? Do I need to remove the Iomega software and go to something different?

Thanks for the help.
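
If the Iomega software does get retired, one hypothetical way around the new-media check is a plain copy job keyed only to the drive letter, since it doesn't care which physical disk is behind G:. A sketch, with an assumed source path and robocopy options to adapt (robocopy is built into Windows from Vista on, and is a resource kit tool on older versions):

```python
# Hypothetical sketch: a drive-letter-based nightly job that doesn't
# care which of the two rotated USB disks is currently mounted as G:.
import os
import subprocess
import sys

DRIVE = "G:\\"
SOURCE = r"D:\data"                   # assumed source share; adjust
DEST = os.path.join(DRIVE, "nightly")

if not os.path.isdir(DRIVE):
    sys.exit("No disk mounted at G: - was tonight's drive connected?")

# Optionally tag each disk the first time it is seen, for the logs.
tag = os.path.join(DRIVE, "backup-disk.id")
if not os.path.exists(tag):
    with open(tag, "w") as f:
        f.write("rotated backup disk\n")

# robocopy /MIR mirrors the tree; it returns nonzero exit codes even
# on success (1 = files copied), hence check=False.
subprocess.run(["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5"],
               check=False)
```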
 
Definitely something not to be forgotten, kmcferrin... star for you

Enkrypted
A+
 
I use a DLT changer with Veritas... it has 8 slots, 7 for tapes and one for a cleaning tape. Works great. I change tapes once a week, and store the latest set offsite for disaster recovery.



Just my 2¢

"In order to start solving a problem, one must first identify its owner." --Me
--Greg
 
DLT or LTO

I would do some fact-finding and figure out how big your data is going to be in 10 years... and get a solution that it will fit on (compressed).
 
10 years out? That seems a little excessive. Storage densities are increasing at a previously unheard-of rate. What is state-of-the-art storage today won't even be in use 10 years from now. Back in 1996 we were using a low-capacity DAT tape to store our enterprise backups. It might have been 10 GB at the time. Today I'm backing up about 800 GB of data nightly onto several LTO 100/200 GB drives. In 2-3 years I'll be backing up a couple of terabytes at a time on either LTO3 tapes or some new storage technology that's even bigger. Ten years from now we'll probably need backup capacities nearing petabyte sizes, and we'll be using something that probably hasn't even been thought of yet.
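
The arithmetic behind that trajectory is easy to sketch. Treating growth as compound with an assumed annual rate (the 40% below is purely illustrative, not from the post), a few lines of Python show why sizing for a decade out is guesswork:

```python
# Back-of-the-envelope capacity projection. The 40% annual growth
# rate is an assumption for illustration; plug in your own trend.
nightly_gb = 800        # today's nightly backup, per the post above
annual_growth = 0.40    # assumed compound growth rate

for years in (3, 5, 10):
    projected_tb = nightly_gb * (1 + annual_growth) ** years / 1024
    print(f"in {years:2d} years: ~{projected_tb:.1f} TB per night")
```

At that assumed rate the 3-year figure lands around 2 TB, in line with the "couple of terabytes" estimate, while the 10-year figure swings wildly with even small changes to the rate, which is exactly the point.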
 
...and we'll be using something that probably hasn't even been thought of yet.

... like Data Storage Crystals in Trek/Babylon 5

:)

Just watch out for fingerprints... hehe

Just my 2¢

"In order to start solving a problem, one must first identify its owner." --Me
--Greg
 
I have answered questions like this so often (though not in this forum) that I've put together a web page discussing what you should use depending on the requirements of your business - no two- or three-paragraph answer is going to be correct for everyone. Read this, and if you have additional questions, I'm sure I or someone else can answer them.


In short, I GENERALLY recommend disk-based solutions for most people who need to back up under 500 GB regularly, tape for those who need to back up more, and tape for those who need to archive backups. But you really should read through the link.

Quick note - there is almost NO CIRCUMSTANCE where I would recommend REV drives - my reasoning is found in the link above (in short, they are expensive per GB and proprietary, which is never good in a backup system).
 
lwcomputing,

I've been reading your article, but I believe there is a mistake relating to RAID 10. You describe it as a pair of RAID 0 arrays that are mirrored, when in actuality it is a stripe set (RAID 0) laid across a series of mirrors. For example, say you have 8 hard disks, with stripes designated S and mirrors designated M. In your example, RAID 10 would look like:


M1 M2
S1 S1
S2 S2
S3 S3
S4 S4

But RAID 10 actually looks like:

S1 S2 S3 S4
M1 M1 M1 M1
M2 M2 M2 M2

There are a couple of reasons for this. Firstly, someone who would choose RAID 10 over RAID 5 would do so primarily for increased read and write performance. In the event of a drive failure under your model, you lose functionality in one stripe set, which means that half of your disks are now totally useless. You are now reading and writing at 50% of your previous rate. Under the actual RAID 10 model, if there is a single disk failure then you only lose the functionality from that single disk, and in our example the read/write throughput is only diminished around 12.5% (1/8th). The larger your RAID 10 array, the smaller the performance degradation of a failed drive.

The second reason to stripe across mirrors rather than mirror a pair of stripe sets is the potential for catastrophic failure. Under either layout the system could theoretically lose up to half of its disks and still not suffer catastrophic data loss. However, the chances of catastrophic loss are far greater in your model than in mine. Again, using the example above:

Let's say that with your model you lose disk M1S1. You still have a functional array with the remaining disks, and as long as any subsequent failures occur on the M1 side you are safe. However, a single failure on the M2 side will kill your array. You have a 4 in 7 (roughly 57%) chance that the next failure will be on the M2 side and therefore result in total data loss.

With the actual RAID 10 model, let's say that you lose disk S1M1. Again, you still have a functional array with the remaining disks, but in this case you are safe as long as any subsequent failure does not occur on disk S1M2. Only a failure of S1M2 will result in data loss, so you have a 1 in 7 (roughly 14%) chance that the next failure results in total data loss.

So not only is the actual RAID 10 model more fault tolerant than what you have described as RAID 10, it also provides greater performance in failed states. And RAID 10 scales very well with more disks. A standard 4U drive shelf in a rack has room for 15 disks; in a RAID 10 that gives you 7 pairs of disks plus a hot spare. Even without the hot spare, you would only have a 1 in 13 (roughly 7.5%) chance that a second failure would result in data loss, and the percentage gets smaller the more disks you add. While the percentage also drops under your model as you add disks, it will always remain higher than 50%.
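
Those odds are easy to verify. A small Python sketch (the layout names are mine; the pair counts match the 8-disk example and the 14-disk shelf above) computes the chance that a second random failure destroys each layout:

```python
# Sketch verifying the second-failure odds argued above. After one
# disk has died, what is the chance a random second failure destroys
# the array under each layout?
from fractions import Fraction

def loss_mirror_of_stripes(n_pairs):
    # The failed disk kills its whole stripe set; any failure among
    # the n_pairs disks on the surviving side is then fatal.
    return Fraction(n_pairs, 2 * n_pairs - 1)

def loss_raid10(n_pairs):
    # Stripe across mirrored pairs: only the failed disk's single
    # surviving partner is fatal.
    return Fraction(1, 2 * n_pairs - 1)

for pairs in (4, 7):  # the 8-disk example and the 14-disk 4U shelf
    a, b = loss_mirror_of_stripes(pairs), loss_raid10(pairs)
    print(f"{2 * pairs} disks: mirror-of-stripes {a} (~{float(a):.0%}), "
          f"RAID 10 {b} (~{float(b):.0%})")
```

This prints 4/7 (~57%) versus 1/7 (~14%) for 8 disks, and 7/13 (~54%) versus 1/13 (~8%) for 14 disks, matching the figures above.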
 
I see your point, I think. I'll work on it tonight and make the modification. Credit to you, kmcferrin.
 