
Why is only one device taking backups?

Status
Not open for further replies.

plugin1

Technical User
Sep 22, 2010
12
US
When there is a volume with enough space, why is it writing to only one device instead of two when both are active? Please consider this a novice question.

/dev/rmt/0cbn(J) LTO Ultrium-3 COL029 writing, 28 GB, 4 sessions
/dev/rmt/1cbn(J) LTO Ultrium-3 COL028 ready for writing, done


$ mminfo -q 'volume=COL029' -r 'volume,written,%used'
volume written (%)
COL029 210 GB 53%
$ mminfo -q 'volume=COL028' -r 'volume,written,%used'
volume written (%)
COL028 384 KB 0.1%
 
This is obviously a (mis)configuration issue. In general you should check:
- Are there enough streams so that some can be led to the next device?
- Are the pools set up correctly?
- Is perhaps only one device selected for the pool?

These are just the most obvious problem areas.
 
I understood your point. It's configured to use the 'Default' pool. It seems both devices are configured in the pool. Here are the details:

type: NSR pool;
name: Default;
comment: ;
enabled: [Yes] No ;
pool type: [Backup] Backup Clone Archive Archive Clone ;
label template: Archive Archive Clone [Default] Default Clone Full Indexed Archive Indexed Archive Clone
NonFull Offsite PC Archive PC Archive Clone Two Sided ;
retention policy: Decade Month Quarter Week Year ;
groups: Default comcast-database comcast-systems ;
clients: ;
save sets: ;
levels: full 1 2 3 4 5 6 7 8 9 incr manual ;
devices: /dev/rmt/0cbn /dev/rmt/1cbn ;
store index entries: [Yes] No ;
auto media verify: Yes [No];
Recycle to other pools: Yes [No];
Recycle from other pools: Yes [No];
volume type preference: 3480 3570 3590 3592 4890 4mm 4mm 12GB 4mm 20GB 4mm 4GB 4mm 8GB 4mm DAT72 8mm 8mm 20GB 8mm 5GB 8mm AIT 8mm AIT-2 8mm AIT-3 8mm AIT-4 8mm Mammoth-2 9490
9840 9840b 9840C 9940 9940B adv_file dlt dlt1 dlt7000 dlt8000 dst dst (NT)
dtf dtf2 file himt LTO Ultrium LTO Ultrium-2 LTO Ultrium-3 optical qic SAIT-1
SD3 sdlt sdlt320 sdlt600 SLR tkz90 travan10 tz85 tz86 tz87 tz88 tz89
tz90 tzs20 VXA ;
max parallelism: 0;
mount class: default;




How do I verify that enough streams are configured? It seems to be using the default session settings.


type: NSR device;
name: /dev/rmt/0cbn;
comment: ;
description: ;
message: " ";
volume name: COL029;
media family: [tape] disk ;
media type: 3480 3570 3590 3592 4890 4mm 4mm 12GB 4mm 20GB 4mm 4GB 4mm 8GB 4mm DAT72
8mm 8mm 20GB 8mm 5GB 8mm AIT 8mm AIT-2 8mm AIT-3 8mm AIT-4 8mm Mammoth-2 9490
9840 9840b 9840C 9940 9940B adv_file dlt dlt1 dlt7000 dlt8000 dst dst (NT)
dtf dtf2 file himt LTO Ultrium LTO Ultrium-2 [LTO Ultrium-3] optical qic SAIT-1
SD3 sdlt sdlt320 sdlt600 SLR tkz90 travan10 tz85 tz86 tz87 tz88 tz89
tz90 tzs20 VXA ;
enabled: [Yes] No Service ;
read only: Yes [No];
target sessions: 4;
max sessions: 512;
parent jukebox: sl500;
cleaning required: Yes [No];
cleaning interval: 6 months;
date last cleaned: "Mon Sep 13 19:48:44 2010";
auto media management: Yes [No];
ndmp: Yes [No];
dedicated storage node: Yes [No];
remote user: ;
password: ;
hardware id: ;
CDI: Not used [SCSI commands];
TapeAlert Critical: ;
TapeAlert Warning: ;
TapeAlert Information: ;
device serial number: ;


Any help would be greatly appreciated.
 
O.K. It seems that you need to create at least 4 streams (the device's target sessions) before NW will use the next device. However, you must ensure that the parallelism for the NW server is higher than 4 - and this was not the case on older versions. Just set it as high as possible - you will receive a message naming the limiting factor.
 
Thanks 605, I can't say how grateful I am. I changed the server parallelism from 4 to 10, as the server NW version I have is 7.3. Hope this will sort things out. Do you want me to increase it further until it throws an error?

type: NSR;
name: qfs002-gbl;
version: "Sun StorEdge(TM) Enterprise Backup 7.3,REV=190 Network Edition/15";
comment: ;
parallelism: 10;
 
2 x 4 = 8

So you will be fine with 10. In fact, Network Edition will allow you to raise the value to 32.

Now you must just ensure that you open enough streams ...
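The rule above can be sketched as simple arithmetic (a minimal illustration of the reasoning in this thread, not NetWorker code): each device is filled up to its target sessions before the next one is used, so the server parallelism must be at least the number of devices times the target sessions for all devices to receive streams.

```python
# Sketch of the parallelism rule discussed above, using the values
# from this setup: two LTO-3 drives, each with target sessions = 4.
# NetWorker fills a device up to its target sessions before opening
# the next device, so server parallelism must reach at least
# devices * target_sessions for every device to be used.

def min_server_parallelism(num_devices: int, target_sessions: int) -> int:
    """Smallest server parallelism that lets every device receive streams."""
    return num_devices * target_sessions

needed = min_server_parallelism(2, 4)  # 2 x 4 = 8
print(needed)                          # 8 -> a setting of 10 is fine
```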
 
Thank you very much 605. Your suggestions are really valuable. After I changed it to 10, it started writing to both devices.


/dev/rmt/0cbn(J) LTO Ultrium-3 COL016 writing, 18 GB
/dev/rmt/1cbn(J) LTO Ultrium-3 COL019 writing at 29 MB/s, 27 GB, 2 sessions

Now a stupid question: to "open enough streams", are you talking about save set or client parallelism? Please bear with me since I'm a beginner.

The client parallelism is set to: 4;

Device:-
target sessions: 4;
max sessions: 512;
 
Compared to other days, it took a long time to complete for fewer files.


09/27/10 08:30:53 full 1335907901 30 0 COL019
09/28/10 08:36:54 full 211921190 51 0 COL019
09/29/10 08:51:05 full 3366125049 13 0 COL016


Start time: Sun Sep 26 08:00:00 2010
End time: Mon Sep 27 00:39:21 2010

Start time: Tue Sep 28 08:00:00 2010
End time: Tue Sep 28 08:37:01 2010

Start time: Wed Sep 29 08:00:00 2010
End time: Wed Sep 29 08:51:11 2010

Could this be what you said earlier about 'opening up enough streams'? I was wondering, because it should have been faster since it used both devices...

My brain is constantly failing to work. :(
 
Distributing streams is a really tricky issue. In general, it follows these rules:
- Ensure that the server parallelism is high enough.
- Ensure that the client(s) send(s) enough streams.
  Usually, a client parallelism of 4 is fine.
- Ensure that the device parallelism (target sessions) is fine.
  Once again, 4 streams is usually o.k.
Do not mix too many streams!
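The interaction of these rules can be illustrated with a small sketch (this only models the behavior described in this thread, it is not NetWorker code): the server parallelism caps the total number of active streams, and each device is filled up to its target sessions before the next device receives any.

```python
# Illustration of how streams are distributed under the rules above:
# the server parallelism caps the total number of active streams, and
# each device is filled to its target sessions before the next is used.

def distribute_streams(streams, server_parallelism, target_sessions, devices):
    """Return how many sessions each device receives, in fill order."""
    active = min(streams, server_parallelism)  # server-wide cap
    placement = []
    for _ in range(devices):
        take = min(active, target_sessions)    # fill device to target sessions
        placement.append(take)
        active -= take
    return placement

# With the old server parallelism of 4, drive 2 never gets a session:
print(distribute_streams(8, 4, 4, 2))   # [4, 0]
# Raising server parallelism to 10 lets drive 2 receive sessions:
print(distribute_streams(8, 10, 4, 2))  # [4, 4]
```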

However, the real outcome depends on a lot of other factors. The major issue is that you want to keep the tape drive streaming. For this purpose, an LTO-3 needs 80 MB/s (native) or 160 MB/s (compressed), respectively. These rates are not achievable via the network! Consequently, your tape drive must reposition the tape and wait until new data has arrived. This is what you should avoid. And distributing the streams across more drives makes the situation even worse.

Assume you receive x MB/s via the network, all sent to one tape drive. If you now add a second tape drive to the scenario, this will not affect the network transfer rate. However, the data rate for drive 1 decreases and the one for drive 2 remains poor. Worst case, you will end up in a scenario like the one you see right now.
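This back-of-the-envelope model can be put into numbers (a rough sketch under the assumption of an evenly split feed; the 100 MB/s figure is the theoretical ceiling of a 1 Gb/s link, not a measured value from this setup):

```python
# Rough model of the streaming problem described above: the network
# delivers a fixed aggregate rate, and splitting it across more drives
# only lowers the per-drive rate further below the 80 MB/s an LTO-3
# needs to stream at native speed.

LTO3_NATIVE_MBPS = 80  # native streaming rate quoted above

def per_drive_rate(network_mbps: float, drives: int) -> float:
    """Data rate each drive sees if the network feed is split evenly."""
    return network_mbps / drives

# A fully utilised 1 Gb/s link delivers at most ~100 MB/s in theory,
# and noticeably less in practice:
for drives in (1, 2):
    rate = per_drive_rate(100, drives)
    streaming = rate >= LTO3_NATIVE_MBPS
    print(drives, rate, "streams" if streaming else "shoe-shines")
```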

To keep both tape drives streaming, you should first collect the data on a local disk before you copy it to tape. This is also known as staging. Unfortunately, this needs temporary disk space and a Disk Backup license.

Another alternative would be to use at least LTO-4 drives, as they can adapt to the transfer rate, at least in certain steps. Unfortunately, the details seem to be kept secret, as I could not find any document on this.

Why don't you try a Staging solution?

 
605, that's an awesome explanation. I got your point and understood why it took longer than usual. I have also noticed that after writing data, the tape in one of the devices was rewinding and then went into 'idle' mode. Now I know why: it waited until the other device completed its session, and when the next batch of data came, it started writing again.

Currently we are consolidating all four data centers into one. When that completes, I will suggest what you have said (a staging solution).

Thanks once again.

 
I just want to mention the alternative that you can, of course, also increase the speed of your network. Right now 10 Gb/s equipment is very expensive, but you could also 'bundle' several 1 Gb/s connections.
 
Hello,

Or you can also schedule many clients in the same group in order to have one 'big regular stream' to write to tape, thus minimizing tape repositioning.

Denis
 
However, if you have already reached the network speed limits, this would not help.
 
Thanks 605 and denisfr.

As 605 pointed out, the bandwidth limit has been reached. I will see how the performance is after the migration work...

 