
Backup with UFSDUMP - How to include remote machine ?


Exie

Hi,

I've got a backup script on Solaris 9 that looks like so:
Code:
# Dump each filesystem listed in $FSYSTEMS to the tape device in $TD
for fsystem in $FSYSTEMS
do

   echo "\nBacking up $fsystem to $TD - `date`...\n"
   ufsdump 0uf $TD $fsystem
   echo "\n$fsystem backed up to $TD - `date`."

done
... this seems to work OK, but I now have a new server that I want to back up onto the same LTO tape.

What's the best way to do this? I thought about NFS, but I'm not sure that would work.

Both systems are running Solaris, though the remote machine is Solaris 10, and the one with the tape drive is Solaris 9.
 
ufsdump supports rmt, the remote tape daemon; see the "f" option in the ufsdump man page. The ufsdump must be initiated from the "new server" side, but you could use ssh to kick it off from your backup server.

Always use the non-rewind device. If you run from cron and the dump doesn't all fit on one tape, ufsdump will prompt for a new volume (and from cron there is no way to reply); run interactively, there is an option for operator intervention.
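For example (hostnames here are just placeholders, and this assumes the tape host accepts the rmt connection, which traditionally runs over rsh), run on the new server:
Code:
# Dump the new server's root filesystem across the network to the
# tape host's compressing, non-rewind device, using the machine:device
# form of the f option (serviced by rmt on the tape host)
ufsdump 0uf backuphost:/dev/rmt/0cn /

# Or kick it off from the backup server itself:
ssh root@newserver ufsdump 0uf backuphost:/dev/rmt/0cn /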
 
Thanks, that's awesome!

I read up on using rsh, but I already had shared keys set up for SSH, so I'm trying this:
[root@VICEVDB01:/var/tmp]# ssh root@vicevdb02 ufsdump 0f - / | ssh root@vicevdb01 dd of=/dev/rmt/0cn

... that is, from VICEVDB01 (which has the tape drive attached) it SSHes over to the second server (VICEVDB02), runs ufsdump there, then pipes the output over ssh back to VICEVDB01 and through dd onto the tape.

My only issue is that I get prompted for the root password on VICEVDB01... presumably this is when the remote server is trying to make the connection back to the tape drive.
Code:
[root@VICEVDB01:/var/tmp]# ssh root@vicevdb02 ufsdump 0f - / | ssh root@vicevdb01 dd of=/dev/rmt/0cn
root@vicevdb01's password:   
  DUMP: Date of this level 0 dump:  6 June 2008 12:43:52 PM
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/c0t0d0s0 (vicevdb02.TFAPAC.TontineFibres.com.au:/) to standard output.

So to debug this, I went over to VICEVDB02 and tried:
[root@VICEVDB02:/]# ufsdump 0f - / | ssh vicevdb01 dd of=/dev/rmt/0cn
... and it worked!

Does this mean it's an environmental thing... that SSHing over to my remote machine isn't setting up all the variables for the SSH connection back?

... any thoughts?
 
Presumably it simply isn't set up to allow passwordless ssh from vicevdb01 to itself... which is unsurprising, because it's unnecessary. Why not just use this:

Code:
ssh root@vicevdb02 ufsdump 0f - / | dd of=/dev/rmt/0cn

There is no "connection back" to the tape drive as you describe it... you are simply piping the stdout of your ssh command to the tape drive. The | is processed on the local host.

Annihilannic.
 
I have never been a fan of using straight dd to write the tape. Ufsdump always writes with a blocking factor of 20 (512-byte blocks, i.e. 10 Kb), and if you read over the net, dd's reads max out at 512 bytes. In one situation I was getting data in 128-byte increments. After trying some of the conversion options, I realized dd wasn't doing what I wanted, so I wrote a little C "tapewriter" that always posted a read to fill a buffer, then always wrote a full buffer. So if I got a read of 128 bytes, I would post the next read for 10 Kb minus 128 bytes, and so on until I had a full 10 Kb buffer to write.

That way, I could say that the tape was written at ufsdump block size 20, and could be extracted just with ufsrestore (local or remote).
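A quick way to check that (a sketch; the b option gives ufsrestore the blocking factor in 512-byte units) is to list the tape's table of contents:
Code:
# List the dump's contents at ufsdump's blocking factor of 20 (10 Kb)
ufsrestore tbf 20 /dev/rmt/0cn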
 
I've read through some public domain dd code, and if the read is larger than the obs size then it blocks correctly, but if the read is smaller, it does not. With output to a file, it doesn't matter, but to a tape, it does matter. I believe this is the problem I saw many moons ago.
 
Strange... I did the following test (although I'm not sure it's valid because it's a) on HP-UX, not Solaris, and b) not to a tape device) and it seems to output chunks in the expected block size of 10240:

[tt]# ( while sleep 1 ; do dd if=/dev/zero count=1 bs=7680 2> /dev/null ; done ) | dd obs=10240 of=t &
[1] 3694
# while sleep 1 ; do ls -l t ; done &
[2] 3731
# -rw-r--r-- 1 root sys 61440 Jun 10 04:19 t
-rw-r--r-- 1 root sys 61440 Jun 10 04:19 t
-rw-r--r-- 1 root sys 71680 Jun 10 04:19 t
-rw-r--r-- 1 root sys 81920 Jun 10 04:19 t
-rw-r--r-- 1 root sys 92160 Jun 10 04:19 t
-rw-r--r-- 1 root sys 92160 Jun 10 04:19 t
-rw-r--r-- 1 root sys 102400 Jun 10 04:19 t
-rw-r--r-- 1 root sys 112640 Jun 10 04:19 t
-rw-r--r-- 1 root sys 122880 Jun 10 04:19 t
-rw-r--r-- 1 root sys 122880 Jun 10 04:19 t
-rw-r--r-- 1 root sys 133120 Jun 10 04:20 t
-rw-r--r-- 1 root sys 143360 Jun 10 04:20 t
-rw-r--r-- 1 root sys 153600 Jun 10 04:20 t
-rw-r--r-- 1 root sys 153600 Jun 10 04:20 t
-rw-r--r-- 1 root sys 163840 Jun 10 04:20 t
-rw-r--r-- 1 root sys 174080 Jun 10 04:20 t
-rw-r--r-- 1 root sys 184320 Jun 10 04:20 t
-rw-r--r-- 1 root sys 184320 Jun 10 04:20 t
kill %1
# -rw-r--r-- 1 root sys 194560 Jun 10 04:20 t
-rw-r--r-- 1 root sys 194560 Jun 10 04:20 t
-rw-r--r-- 1 root sys 194560 Jun 10 04:20 t
-rw-r--r-- 1 root sys 194560 Jun 10 04:20 t
-rw-r--r-- 1 root sys 194560 Jun 10 04:20 t
kill %2
[1] - Terminated ( while sleep 1 ; do dd if=/dev/zero count=1 bs=7680 2> /dev/null ; done ) | dd obs=10240 of=t &
#
[/tt]

Annihilannic.
 
Okay, I did some stracing on Linux and now am convinced that it does indeed do the writes correctly with obs=X. So I would suggest that option be used when dd-ing dump images to tape.
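Applied to the command from earlier in the thread, that would be (10240 bytes matching ufsdump's blocking factor of 20):
Code:
# obs= makes dd collect short reads from the pipe into full
# 10240-byte blocks before each write to the tape
ssh root@vicevdb02 ufsdump 0f - / | dd obs=10240 of=/dev/rmt/0cn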
 
Thanks folks,

Annihilannic's answer sorted it; the pipe was being processed on the originating box.

My only issue now is performance... not sure if there's much I can do about this, as it's running over a 1 Gb LAN. Here's the performance dumping from local disk to the tape:
DUMP: 52937470 blocks (25848.37MB) on 1 volume at 42028 KB/sec

... and here's the dump from the remote session:
DUMP: 81983678 blocks (40031.09MB) on 1 volume at 2170 KB/sec

I tried changing the SSH session to use -C for compression to reduce the LAN traffic, but it didn't help. Any thoughts on how I could juice up the performance?
 
That's pretty bad... what throughput do you get with a normal scp between those two boxes?

How about if you dump to a file rather than tape on the destination box, any speed difference?
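For example (the file path is just a placeholder), something along these lines would give a raw ssh throughput figure:
Code:
# Time a straight copy of a large file to gauge ssh throughput alone
time scp root@vicevdb02:/var/tmp/bigfile /var/tmp/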

Annihilannic.
 
Just a quick update: used scp to rip 7.5 GB across in about 6:30, which by my crude calculation makes 18925 KB/s... a lot faster than the 2170 KB/s above.

Will need to find some space and will then try ufsdump over the network to a file, and see how that goes.
 
I believe that both the input block size of dd and the pipe are slowing the I/O. Pipes usually hold only about 8 Kb, which limits them, and dd reads with its own small default block size (512 bytes) unless you raise it.
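If that's the bottleneck, raising dd's input block size while keeping the 10 Kb output blocks might help (a sketch along the lines of the commands above):
Code:
# Read from the pipe in larger chunks (ibs) while still writing
# full 10240-byte blocks to the tape (obs)
ssh root@vicevdb02 ufsdump 0f - / | dd ibs=64k obs=10240 of=/dev/rmt/0cn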
 