Linux Backup Options

centibite (Technical User)
Oct 21, 2008
OS: Red Hat Linux release 8.0 (Psyche)

I am trying to back up our one Linux server to our Windows file server via a Windows share. In the past we used tape backup for all of our servers, and this one Linux box is the only one still using tapes. I have done a bit of research and am still a bit overwhelmed. I have the Windows share mounted via "mount //server/share /mnt/Backup -o credentials=/home/backup/.smbpasswd"

The current backup script uses dump, which gives this error every time I try to write to the Windows share with the command /sbin/dump -0u -f /mnt/HarrisBackup/ /usr:

DUMP: Cannot open output "/mnt/Backup/": Is a directory
DUMP: Do you want to retry the open?: ("yes" or "no")

So I tried tar (tar -cvzf /mnt/Backup/Backup_etc.tar.gz etc), which gives a variety of errors like "broken pipe" and "out of disk space". I know the Windows server is not out of space and there are no user disk quota restrictions.

At this point I have been researching on the net and have found boards saying that dump doesn't work with the Windows file system and that tar is a bad idea.

Please help . . . .
 
I think the problem with your dump command is that you are specifying the directory to back up to (/mnt/backup), but you need to name a file in that directory, such as /mnt/backup/unixdump. The dump command cannot create a file called /mnt/backup since there is already a directory by that name.
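
Purely as an illustration (the file name here is made up; use whatever naming suits your script), the corrected command would look something like:

/sbin/dump -0u -f /mnt/Backup/usr_level0.dump /usr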
 
Ah, that does make sense. I changed the command to:

/sbin/dump -0u -f /mnt/HarrisBackup/HarrisBackup_usr /usr

It almost worked; it pulled 1.99 Gig of the 2.1 Gig. It then asked me to start the next tape, but there is no tape, it's a Windows share, so I will have to look at the man page for that. It also says the file is too large. This is proving to be more trying than I expected, but I will keep looking into it. This is the output from the first run:

# /sbin/dump -0u -f /mnt/HarrisBackup/HarrisBackup_usr /usr
DUMP: Date of this level 0 dump: Tue Oct 21 14:41:04 2008
DUMP: Dumping /dev/sda10 (/usr) to /mnt/HarrisBackup/HarrisBackup_usr
DUMP: Added inode 8 to exclude list (journal inode)
DUMP: Added inode 7 to exclude list (resize inode)
DUMP: Label: /usr
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 1974429 tape blocks.
DUMP: Volume 1 started with block 1 at: Tue Oct 21 14:41:06 2008
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: write error 2097170 blocks into volume 1: File too large
DUMP: Do you want to rewrite this volume?: ("yes" or "no") yes
DUMP: Closing this volume. Prepare to restart with new media;
DUMP: this dump volume will be rewritten.
DUMP: Closing /mnt/HarrisBackup/HarrisBackup_usr
DUMP: Volume 1 completed at: Tue Oct 21 14:46:06 2008
DUMP: Volume 1 2097180 tape blocks (2048.03MB)
DUMP: Volume 1 took 0:05:00
DUMP: Volume 1 transfer rate: 6990 kB/s
DUMP: Is the new volume mounted and ready to go?: ("yes" or "no") yes
DUMP: Volume 1 started with block 1 at: Tue Oct 21 14:41:06 2008
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: write error 2097170 blocks into volume 1: File too large
DUMP: Do you want to rewrite this volume?: ("yes" or "no") n
DUMP: Do you want to start the next tape?: ("yes" or "no") yes
DUMP: Closing /mnt/HarrisBackup/HarrisBackup_usr
DUMP: Volume 1 completed at: Tue Oct 21 14:55:06 2008
DUMP: Volume 1 2097180 tape blocks (2048.03MB)
DUMP: Volume 1 took 0:14:00
DUMP: Volume 1 transfer rate: 2496 kB/s
DUMP: Change Volumes: Mount volume #2
DUMP: Is the new volume mounted and ready to go?: ("yes" or "no") yes
DUMP: Volume 2 started with block 2097151 at: Tue Oct 21 14:55:16 2008
DUMP: Volume 2 begins with blocks from inode 737370
DUMP: 100.00% done, finished in 0:00
DUMP: Closing /mnt/HarrisBackup/HarrisBackup_usr
DUMP: Volume 2 completed at: Tue Oct 21 14:55:20 2008
DUMP: Volume 2 44460 tape blocks (43.42MB)
DUMP: Volume 2 took 0:00:04
DUMP: Volume 2 transfer rate: 11115 kB/s
DUMP: 2141610 tape blocks (2091.42MB) on 2 volume(s)
DUMP: finished in 207 seconds, throughput 10345 kBytes/sec
DUMP: Date of this level 0 dump: Tue Oct 21 14:41:04 2008
DUMP: Date this dump completed: Tue Oct 21 14:55:20 2008
DUMP: Average transfer rate: 6805 kB/s
 
Sorry, I cut off the last lines of the output:

DUMP: DUMP IS DONE
#
 
dd might be trying to enforce a 2GB output file limit, which is regarded as a "tape" unit as well.

Googling will probably guide you on how to address this... I'm not sure if it's being imposed by the Windows destination or by dd as a practice.
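
One quick way to check whether the limit is coming from the share itself (assuming it is still mounted at /mnt/Backup) is to write a file a little over 2GB to it with dd and see where it stops:

dd if=/dev/zero of=/mnt/Backup/sizetest bs=1M count=2200

If that also dies at around the 2GB mark, the restriction is on the mount/share side rather than in dump or tar; just remember to delete the test file afterwards.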

D.E.R. Management - IT Project Management Consulting
 
I am not sure what "dd" means, but I tried to tar a directory that was over 2Gig and it stopped at 1.99Gig as well. So atleast I have a common issue, that it can't transfer files greater than 2Gig. I do appreciate the suggestions though.
 
Do you have the 'a' (autosize) option? This turns off the tape size calculation, so dump doesn't think it is going to need multiple volumes and restrict how much it writes to each volume.
 
elgrandeperro,

Per your suggestion I added the -a option as shown below, but it still stops writing to disk at the exact same file size. It does the same thing when using tar to back up the same directory. I know that the Windows server is NTFS, so it doesn't have a file size restriction. Does Linux have a maximum file size restriction?

There is just one thing bugging me about this issue: when we back up to tape there is no problem, and we back up to one tape...

/sbin/dump -0ua -f /mnt/Backup/Backup_usr /usr

I continue to receive this same error message:
DUMP: write error 2097170 blocks into volume 1: File too large

I found a similar issue on another message board from a person who was unable to create dump files larger than 2G, but it never received a reply.

 
So I'm back to my suggestion that perhaps the Windows share's underlying OS has a 2GB file size limitation that is being imposed upon 'dump'.

(Sorry for the prior 'dd' reference; similar task/tool.)

D.E.R. Management - IT Project Management Consulting
 
Is this any help?

FAT

Access available on DOS, all versions of Windows, Windows NT, Windows 2000, and Windows XP
Partitions can be from 1.44 megabytes (MB) to 4 gigabytes (GB) in size
Maximum file size is 2GB

--------------------------------------------------------------------------------

FAT32

Access available on Windows 95 OSR2, Windows 98 (SE), Windows Millennium Edition, Windows 2000, and Windows XP
Partitions can be from 512MB to 2 terabytes (TB) in size
Maximum file size is 4GB
Note that on Windows XP the maximum partition size is 32GB

--------------------------------------------------------------------------------

NTFS

Access available on Windows NT, Windows 2000, and Windows XP
Partitions can be from 10MB to 2TB, or greater
Maximum file size can be as large as the partition the file is on

 
The dump man page also indicates that the -M option will output multiple volume names to get around the 2GB limit.
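
As a sketch only (the file prefix and the volume size are my guesses, so check man dump on the Red Hat 8 box before relying on it), that would look something like:

/sbin/dump -0u -M -B 1900000 -f /mnt/Backup/usr_dump. /usr

With -M, the name given to -f is treated as a prefix and dump writes usr_dump.001, usr_dump.002, and so on; -B sets the volume size in 1 kB blocks, so 1900000 keeps each piece just under the 2GB mark.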
 
So perhaps you need to run it out through a pipe using "split", and join the pieces back together (with cat) for the restore? My experience with many versions of dump/restore is that as long as the output is streamed, both in and out (even with interactive restore), it will work.
 
Just a thought, but you indicated that you were unable to create a tar file larger than 2GB on the share.

Run "mount" by itself and you should see the FS type that the share is mounted as.

The Windows partition itself is probably NTFS (unless it is pretty out of date).

Not sure what the mount command (really the smbclient backend) on Red Hat 8 defaults to FS-wise, but if you do not provide the -t option it tries to determine the best fit on its own.

You could try "mount -t cifs //server/share /mnt/Backup -o credentials=/home/backup/.smbpasswd"

Not sure if FS type cifs is supported on Red Hat 8 (don't have any copies lying about); if not, try smbfs or ntfs.
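
If cifs turns out not to be available and the share has to stay on smbfs, it might also be worth checking whether your smbmount supports the lfs (large file support) option; I'm not certain the version shipped with Red Hat 8 has it, so check man smbmount first. Something along these lines:

mount -t smbfs //server/share /mnt/Backup -o lfs,credentials=/home/backup/.smbpasswd

Without large file support, old smbfs mounts top out at 2GB per file, which would line up with the symptom here.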
 
thalligan
The "mount" command yields the following results:
//server/share on /mnt/backup type smbfs (0)
However, when I add -t cifs or ntfs, it displays "file type not supported."


elgrandeperro
I don't know how "split" works, I tried and it keeps telling me that the byte size is invalid. I will keep playing with it.


bluemrregan
Using the "-M" option does create multiple dump files, but it still gives the error about file size limit. Then it ask if I want to rewrite and eventually after answering yes a few times it starts on the next file, so I am not sure if it is workning or not.

 

Yeah, so you need to run split while cd'd to the mount point.

Then

dump 0af - /usr | split --bytes=1000m ( I don't think split takes g for the gig flag)

This will make 1 GB files, prefixed with the letter x (the prefix can be changed).

So to do a restore, you can use simple cat
cat x* | restore -if - (for interactive)

I believe the shell will sort x* in alphabetic order, so the files will be in the correct sequence.
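
To put the whole thing together, a sketch of what that could look like for this setup (the usr_dump_ prefix and the 1000m chunk size are just illustrative, adjust to taste):

cd /mnt/Backup
/sbin/dump -0u -f - /usr | split --bytes=1000m - usr_dump_

and then to restore interactively from the pieces:

cat /mnt/Backup/usr_dump_* | restore -if -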
 