
Tape Capacity Problem 1

Status
Not open for further replies.

yeazel (Technical User) · Sep 9, 2002
I have inherited a Legato setup with an L1000 w/ 30 tape slots and each tape has a capacity of 80GB. The configuration has 2 pools in use, Default and oracle. On the tapes in the oracle pool, I'm seeing anywhere from 80-105GB of data written to a full tape. On the Default pool, I am consistently only seeing 32-33GB of data written to those tapes. I have verified that the tapes should be 80GB in capacity. Anyone seen this before or do you have any ideas as to why I'm not getting the full capacity out of those tapes?

Thanks!

Troy
 
Typical for Oracle, since it preallocates disk space. You'll never get those numbers on regular backups, only Oracle backups. Is 80 GB the native capacity? If so, you've got problems; if it's the compressed capacity, you may simply not have compressible data. Check the density setting on your tape devices; I use the compressed density. I have SDLT drives (100 GB native); Oracle backups run around 280-459 GB per tape, and normal backups run 102-216 GB per tape.

joe
 

If you have an L1000 it most likely has DLT 7000 drives. With these drives you will get 35-70 GB of data; the higher number assumes you get good compression. Databases can give very high numbers when writing to tape, because the database files may not be full. The numbers you are getting are not unrealistic.

Although the tapes you have purchased say they will do 80 GB (DLT IV tapes), that is only IF you are using DLT8000 drives AND you get good compression. The tape device and the data really determine how much gets written to tape.

John7
 
The drives are DLT 7000 drives, so it looks like the uncompressed capacity should be 35GB. I verified that the clients backing up to the Default pool are using the UNIX with Compression directive, and the '<< / >>' directive does contain the '+compressasm: .' setting. All oracle datafiles are skipped, so the pool is basically just backing up standard UNIX system files. Given that, I'm still confused as to why I'm not seeing more than 32-33GB written to the tapes in this pool.
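For anyone following along, a client directive of the kind described above looks roughly like this. This is only a sketch of the standard NetWorker directive syntax; the datafile path and `*.dbf` skip pattern are illustrative, not taken from the poster's setup:

```
<< / >>
+compressasm: .

<< /u01/oradata >>
skip: *.dbf
```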
 
We have a similar problem with DLT7000 drives. With compression on we should be able to manage well in excess of 32Gb, but when backing up standard files we only manage around 32Gb. When backing up an SQL database, however, we do manage in the region of 70-80Gb of data. Indeed, databases do compress a lot more than already-compressed files such as .zip or .jpg, but I feel that Legato is not making use of this compression correctly. I can prove this by using ntbackup on the same set of files, and that way I get 70Gb of data on the tape as opposed to around 32Gb.

In order to try and get to the bottom of this compression problem I have tried using different SCSI adapters, changed the block sizes, ensured that all compression directives and environment variables are set, changed DLT drivers in W2K, and even tried a different OS (Linux instead of W2K). All give exactly the same results.

The last thing I tried was AIT-2 tapes. These should give 50Gb native and 100Gb compressed. When using Legato I only get about 48Gb max with compression set; when using ntbackup with AIT-2 I get around 85Gb. Clearly there is an issue here, and it does not appear to be OS, hardware or version related. The versions of Legato that we have tried are NetWorker V6.1.1, V6.1.2 and V6.2.

Has anybody out there had this problem?

PS: We have reported this to Legato, but after a few weeks of correspondence and trying different things, they have now gone very silent.
 
Are you compressing at the client?

If the client compresses the files (very good, as that means less network traffic), then the tape drive will not find anything more to compress.

If the client sends the data uncompressed, then the amount of compression will be determined by the patterns in the data. Oracle data has lots of patterns in it; executables will have almost no patterns in them.
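The point about data patterns can be sketched with a quick test using Python's zlib. This is only illustrative (tape drives use different hardware compression algorithms), and the sample data merely stands in for database blocks versus high-entropy files:

```python
import os
import zlib

# Repetitive content, loosely standing in for preallocated database blocks
patterned = b"ORACLE BLOCK PADDING " * 100_000
# High-entropy content, standing in for executables or already-zipped files
random_like = os.urandom(len(patterned))

for name, data in [("patterned", patterned), ("random-like", random_like)]:
    out = zlib.compress(data)
    print(f"{name}: {len(data):,} -> {len(out):,} bytes")
```

The patterned input shrinks dramatically; the random-like input barely changes, which is why database backups can blow past a tape's native capacity while ordinary file mixes often cannot.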
 
Selecting compression or no compression at the client end makes absolutely no difference either. We still get 32Gb of data with compression on or off.

I have been through all the usual options with Legato and they are a bit mystified about the cause.

I am also not the only one with this problem. Legato is used in many other countries in our company, and they all suffer the same issues, despite having different OS, hardware and tape technologies.

The problem we have at the moment is that we are trying to migrate from Backup Exec to Legato, but because Legato is not using compression, our tape library is not big enough to back up all our data. As a result we are still stuck using our tape library with Backup Exec until this issue is resolved.

Any comments would be appreciated.
 
If the compression occurs on the client side, does the Networker server record the amount of compressed data written to tape or just the amount of data it receives and pushes to tape?

If it just records the actual amount of data received and pushed to tape, the behavior I'm seeing would make some sense if the oracle data isn't being compressed until it gets to the tape library. The file system data is already supposed to be compressed when it arrives. I just need to check what is happening to the oracle data before RMAN sends it to the Legato server.
 
We use strictly hardware compression at the tape drive on our DLT7000s. If you use Legato compression, it makes your clients spend resources before sending the data. Make sure you don't use Legato compression AND have compression set on the tape drive. I've been told by several software engineers that "double" compression is counter-effective, causing even less data per tape.
 
yeazel, the server only sees the compressed data in that case, and records those (post-compression) amounts, not the pre-compression sizes.

AI3, if you are using UNIX, be sure you are using the correct tape device. /dev/rmt/0ubn is working for us.
 
Darn, forgot to mention that /dev/rmt/0ubn is on Solaris.
 


I always use /dev/rmt/0cbn on Solaris for compression.
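For reference, the letters in those Solaris /dev/rmt/ names encode the density and close behavior of the device; per the Solaris st driver conventions, 'u' and 'c' are normally synonyms for the highest (compressed) density, so 0ubn and 0cbn usually behave the same. A hypothetical little decoder (not part of Legato or Solaris, just documenting the convention):

```python
# Hypothetical helper that decodes the common Solaris /dev/rmt/
# naming suffixes discussed in this thread.
def decode_rmt(path: str) -> dict:
    name = path.rsplit("/", 1)[-1]      # e.g. "0cbn"
    i = 0
    while i < len(name) and name[i].isdigit():
        i += 1                          # leading digits = drive number
    opts = name[i:]
    # 'u' (ultra) and 'c' (compressed) normally select the same,
    # highest density on the st driver.
    densities = {"l": "low", "m": "medium", "h": "high",
                 "u": "compressed", "c": "compressed"}
    density = next((densities[ch] for ch in opts if ch in densities),
                   "default")
    return {"drive": int(name[:i]),
            "density": density,
            "bsd_behavior": "b" in opts,  # BSD close semantics
            "no_rewind": "n" in opts}     # no rewind on close

print(decode_rmt("/dev/rmt/0cbn"))
print(decode_rmt("/dev/rmt/0ubn"))
```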
 
Hi

Thanks for all your comments about compression; I am aware of all of these. However, whatever I use, whether software compression, hardware compression, a combination of both, or none, I cannot get more than 30Gb of data on the DLT7000 tape (the compression light is also lit on the DLT7000 drive). However, if I use the SQL module to back up an SQL database I do manage the full amount of data (on some occasions 80Gb).

I have also investigated the possibility that Legato was reporting the amount of data as pre-compressed data, i.e. that even though Legato reports a tape as full at 30Gb it might actually contain around 50 to 60Gb of data. When I did an actual measurement (backed up exactly 30Gb of data) I found that Legato was physically managing only 30Gb of data per tape (with compression set). So its reporting was indeed correct, although it was not compressing the data.

Even if Legato were reporting the data as pre-compressed data, that would not explain why compression works OK with the SQL module.


 
I did verify that by turning software compression off, I am now seeing more than 35GB of data being written to the tape. Since the data was being compressed on the client side, Legato was only showing what the server was sending to tape, not the original size of what the client was sending.
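The resolution is easy to see with some back-of-the-envelope numbers; all figures below are illustrative, not measurements from this thread:

```python
# Illustrative numbers only. If the client compresses its files roughly
# 2:1 before sending, the server records only the post-compression bytes,
# so a tape looks "full" at ~33 GB even though far more original data
# is actually protected.
client_data_gb = 70.0   # original file data on the client (assumed)
client_ratio = 2.1      # client-side compression ratio (assumed)
sent_gb = client_data_gb / client_ratio
print(f"Server records ~{sent_gb:.0f} GB on a 35 GB-native tape")
```

Numbers in that range line up with the 32-33 GB consistently reported on these ~35 GB-native DLT7000 tapes.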

Thanks for all the feedback. My problem has been solved!
 
Hi yeazel,
How did you solve your problem? I still have the same problem!
shada_saghafi@yahoo.com
 