
Problems with tarring up large files (2GB+)

Status
Not open for further replies.

christhomas1977

Technical User
Apr 25, 2003
Hi,

I wonder if anyone can help?

We are running Veritas Volume Manager 3.5 on a Solaris 8 Sun V880 box.

I am trying to tar up large amounts of data under Volume Manager partitions (not Veritas File System). When the tar archive (created with GNU tar) reaches about 2 GB, it bombs out with the following error:

/usr/local/bin/tar: Cannot write to /data/archive/28_05_03/ver_data_images.tar:
File too large
/usr/local/bin/tar: Error is not recoverable: exiting now

Similar errors are produced with the standard Solaris tar as well.

thanks

Chris

 
Try using the tar command that comes with the OS: /usr/bin/tar or /usr/sbin/tar (/usr/bin/tar is a link).

The Sun man pages say that /usr/bin/tar is large file aware, so it should be able to handle a file larger than 2 GB.
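Since the failing binary is /usr/local/bin/tar, it may simply be shadowing the OS tar in the PATH. A quick sketch of how to check which tar actually runs (the commands below are standard shell built-ins/utilities, not Solaris-specific):

```shell
# Check which tar binary the shell resolves first in PATH --
# /usr/local/bin/tar (GNU) may be shadowing the OS /usr/bin/tar.
command -v tar
# Print the version banner so you know which build you are getting
# (GNU tar supports --version; the Solaris OS tar does not).
tar --version 2>/dev/null | head -1
```

If the first command prints /usr/local/bin/tar, invoke /usr/bin/tar by its full path to rule out the GNU build.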



 
Solaris tar and GNU tar are both large-file aware.

The error message looks to me like it's coming from the file system.

Is your file system flagged largefiles?
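One way to test this question directly, independent of tar, is to try creating a sparse file just past the 2 GB boundary. (The /tmp path below is a stand-in; point it at the partition you are archiving to.)

```shell
# Seek 2 GB into a new file and write a single byte; on a filesystem
# mounted without largefiles support this fails with "File too large".
dd if=/dev/zero of=/tmp/largefile.test bs=1 count=1 seek=2147483648 2>/dev/null
# If the file lists at 2147483649 bytes, the filesystem accepts >2 GB files.
ls -l /tmp/largefile.test
```

The file is sparse, so it consumes almost no disk space; remember to rm it afterwards.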
 
Thanks for the suggestions.

How do you check whether the file system is flagged largefile?

If it isn't, how do you change it?

I have tried the standard OS tar command, but it doesn't work either.

 
Can we assume that you have sufficient space for the process to complete? (I guess tar requires some temporary resources over and above the size of the file generated.)

L.
 
hi,

Yes, the partition I am copying to is 58 GB. I have 10 GB of swap space as well.

Thanks

Chris
 
OK, this may be useful... cut and paste from some of my course notes:

You can check a filesystem's logical block size with "df -g", e.g. for a "/data" filesystem of approximately 1 GB:
# df -g /data
/data (/dev/dsk/c0t1d0s0 ):8192 block size 1024 frag size
2036764 total blocks 2036746 free blocks
2016380 available 251008 total files
251004 free files
8388616 filesys id ufs fstype 0x00000004 flag
255 filename length
The logical block size of /data is 8192 bytes, or 8 KB. Note that the total blocks reported by df are physical blocks (512 bytes each):

2036764/2 = 1018382 KB; 1018382*1024 = 1042823168 bytes

Also note that the blocks reported by fsck are logical blocks, except frags and used blocks, i.e.:
# umount /data
# fsck /data
** /dev/dsk/c0t1d0s0
** Last Mounted on /data
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 9 used, 1018373 free (21 frags, 127294 blocks, 0.0% fragmentation)

127294*8192 + (21+9)*1024 = 1042792448 + 30720 = 1042823168 bytes
21+9 is frags + used (frag and used units are 1024 bytes each, as "df -g" shows for the frag size).
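The arithmetic can be sanity-checked with shell integer expansion, using the constants from the df and fsck output above:

```shell
# df total blocks are 512-byte physical blocks: halve to get 1 KB units,
# then multiply by 1024 to get bytes.
echo $(( 2036764 / 2 * 1024 ))                # -> 1042823168
# fsck blocks are 8 KB logical blocks; the frags + used are 1 KB units.
echo $(( 127294 * 8192 + (21 + 9) * 1024 ))   # -> 1042823168
```

Both paths give the same byte count, which is the consistency check the notes are making.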

You can change the logical block size of a UFS filesystem in Solaris with the "-b" flag of the newfs command (when you create a new filesystem); note, however, that the Solaris sun4u architecture does not support a 4096-byte block size.

 
mount -v will show you whether it is mounted with the 'largefiles' mount option. You can add this to the options field in /etc/vfstab if it isn't there.
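For reference, a Solaris-only sketch of that procedure (shown as comments since it only applies on the Solaris box; the device names are placeholders taken from the earlier df example, not your actual devices):

```shell
# 1. Check the current options on the mount point:
#      mount -v | grep /data
# 2. If 'largefiles' is missing, add it to the mount-options field
#    of the filesystem's line in /etc/vfstab, e.g.:
#      /dev/dsk/c0t1d0s0  /dev/rdsk/c0t1d0s0  /data  ufs  2  yes  largefiles
# 3. Remount so the option takes effect:
#      mount -F ufs -o remount,largefiles /data
```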

Annihilannic.
 