Tek-Tips is the largest IT community on the Internet today!


Creating folder structures when copying files

Status
Not open for further replies.

MCubitt

Programmer
Mar 14, 2002
1,081
GB
This is a long shot and I have a feeling I have asked in the past with a negative answer but...

In UNIX, using
cp <source> <destination>
If the destination path does not exist, can it be created easily?

In other words,
cp -p /oracleindex/IFSD/users01.dbf /backup/oracleindex/IFSD/users01.dbf
cp -p /redolog1/IFSD/redo01.log /backup/redolog1/IFSD/redo01.log

would create /backup/oracleindex/IFSD/ and /backup/redolog1/IFSD/ if they did not exist.

If it cannot be done with the cp command, what is the suggested method?

(manually creating them beforehand is a last resort!)





Applications Support
UK
 
I think using "tar" is your best bet. Like so:

umask 0
cd /
tar cvf - ./oracleindex/IFSD/users01.dbf|(cd /backup;tar xf -)

This will preserve file date and ownership as well.
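To see the per-file tar pipe in action, here is a minimal self-contained sketch using throwaway paths under /tmp (the loop and the demo paths are assumptions, not the poster's actual setup; in reality the source root would be / and the target /backup):

```shell
# set up a throwaway source tree and a stand-in for /backup
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/src/redolog1/IFSD /tmp/tardemo/backup
echo redo > /tmp/tardemo/src/redolog1/IFSD/redo01.log

# per-file tar pipe: archive relative to the source root, extract under backup;
# tar recreates ./redolog1/IFSD/ on the extracting side automatically
cd /tmp/tardemo/src
for f in ./redolog1/IFSD/redo01.log
do
    tar cf - "$f" | (cd /tmp/tardemo/backup && tar xf -)
done
```

The key point is that the archive only ever exists as a stream in the pipe, so no intermediate tar file lands on disk.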
 
#!/bin/ksh

mkdir -p $(dirname /backup/redolog1/IFSD/redo01.log) 2> /dev/null

cp -p /redolog1/IFSD/redo01.log /backup/redolog1/IFSD/redo01.log

vlad
+----------------------------+
| #include<disclaimer.h> |
+----------------------------+
 
Hmm... interesting idea, but I see a few potential pitfalls.

I am basically making a flash backup of an Oracle database. The database copied is variable (we have 6 and it could be any one of them).

It copies the structure to another area of disk.

Using a tar will mean we have the original, the tar and the duplicate, at least for a moment. With the size of our DB that might cause an issue, though it is not necessarily a deal-breaker. Plus, if we do them on a file-by-file basis it might not be noticeable.

The other possible issue is in downtime. Is it faster to tar a bunch of files or to copy them? We shut the DB down while we flash copy it, we are attempting to minimise disruption. If tar'ing will double the time a copy will take, it might not be an option. Does anyone have any input on that, perhaps it's insignificant?

However, I am being tempted down the tar route, I have to say!

thanks




Applications Support
UK
 
vlad, aha.. so thank you again.

Let me get this right..

mkdir -p $(dirname /backup/redolog1/IFSD/redo01.log) 2> /dev/null

cp -p /redolog1/IFSD/redo01.log /backup/redolog1/IFSD/redo01.log
the mkdir takes just the directory name from the string "/backup/redolog1/IFSD/redo01.log", giving "/backup/redolog1/IFSD"

the 2> will direct any error (e.g. "directory exists") to /dev/null (nowhere).



cp -p /redolog1/IFSD/redo01.log /backup/redolog1/IFSD/redo01.log
This will copy the file.
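Those two steps can be wrapped into one reusable function (a sketch; "cpdirs" is a made-up name, and the demo paths under /tmp are placeholders for the real Oracle files):

```shell
# cpdirs <source> <dest>: create the destination directory, then copy
# preserving mode, ownership and timestamps
cpdirs() {
    mkdir -p "$(dirname "$2")" 2> /dev/null
    cp -p "$1" "$2"
}

# throwaway demonstration paths
rm -rf /tmp/cpdirs_demo
mkdir -p /tmp/cpdirs_demo/src
echo data > /tmp/cpdirs_demo/src/file1
cpdirs /tmp/cpdirs_demo/src/file1 /tmp/cpdirs_demo/backup/src/file1
```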

Thanks very much


Applications Support
UK
 
As motoslide posted, either tar or cpio might be valid options as well.

Using a tar will mean we have the original, the tar and the duplicate, at least for a moment.

there's no tar file on disk - the "creating" tar does NOT write an archive anywhere - it simply pipes the archive stream straight to the "extracting" tar.

vlad
+----------------------------+
| #include<disclaimer.h> |
+----------------------------+
 
Oh I see, how unusual!

It's just the performance I guess... I'd imagine tar'ing 20 GB, piping the result and untar'ing it would take considerably longer than just copying 20 GB.

I prefer the mkdir option, at least for now.




Applications Support
UK
 
My guess is that you are right. The "copy" might be faster than the "tar".
The advantage of "tar" is that it will re-create entire directory structures as required. In your case, where you are really just copying 2 files, its benefits wouldn't justify the additional overhead.
Having said that, I'd be curious to know the time difference on a 20GB file between "cp" and "tar". As Vlad stated, we aren't really creating a third copy, just piping from one command to the next.
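If anyone wants to measure it, here is a self-contained sketch of a timing comparison (throwaway paths and a small dd-generated file; bump the count up for a realistic test):

```shell
# build a small throwaway data file
rm -rf /tmp/timedemo
mkdir -p /tmp/timedemo/src /tmp/timedemo/cpdst /tmp/timedemo/tardst
dd if=/dev/zero of=/tmp/timedemo/src/big.dbf bs=1024 count=1024 2> /dev/null

# plain copy
time cp -p /tmp/timedemo/src/big.dbf /tmp/timedemo/cpdst/big.dbf

# tar pipe (no intermediate archive on disk)
time ( cd /tmp/timedemo/src && tar cf - ./big.dbf | (cd /tmp/timedemo/tardst && tar xf -) )
```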
 
Moto,

There are more than two files, in reality, the list looks something like this:
/oracledata/EDI/control01.ctl
/oracledata/EDI/control03.ctl
/oracledata/EDI/EDI.dbf
/oracleindex/EDI/control02.ctl
/redolog1/EDI/redo01.log
/redolog1/EDI/redo03.log
/redolog2/EDI/redo02.log
/oracle/app/oracle/oradata/EDI/drsys01.dbf
/oracle/app/oracle/oradata/EDI/indx01.dbf
/oracle/app/oracle/oradata/EDI/system01.dbf
/oracle/app/oracle/oradata/EDI/temp01.dbf
/oracle/app/oracle/oradata/EDI/tools01.dbf
/oracle/app/oracle/oradata/EDI/undotbs01.dbf
/oracle/app/oracle/oradata/EDI/users01.dbf
/oracle/app/oracle/oradata/EDI/xdb01.dbf
/oracle/app/oracle/product/9.2.0.1.0/dbs/spfileEDI.ora
/oracle/app/oracle/admin/EDI/pfile/initEDI.ora
/oracle/app/oracle/product/9.2.0.1.0/dbs/lkEDI
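A list like that can drive the mkdir/cp approach directly from a file, one path per line (a sketch with throwaway stand-ins under /tmp; in reality the prefix would be /backup and the list would hold the paths above):

```shell
# throwaway stand-ins for the real tree and list file
rm -rf /tmp/listdemo
mkdir -p /tmp/listdemo/oracledata/EDI
echo ctl > /tmp/listdemo/oracledata/EDI/control01.ctl
printf '/oracledata/EDI/control01.ctl\n' > /tmp/listdemo/list.txt

# for each path: create the matching directory under the backup root, then copy
while read -r f
do
    mkdir -p "/tmp/listdemo/backup$(dirname "$f")"
    cp -p "/tmp/listdemo$f" "/tmp/listdemo/backup$f"
done < /tmp/listdemo/list.txt
```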

Thanks


Applications Support
UK
 
Why not just a
cp -r directory

Or are there other files in the directories you're not wanting to copy?

I've used tar before for a larger set of files. cpio is handy if the backup is being written to a different type of device like a tape or such.
 
Run with
awk -f backup.awk backuploc="/foo" rscript="file2" data >file1

backup.awk:

Code:
BEGIN {
  FS = OFS = "/"
}

# Skip lines with "temp*.dbf".
toupper($0) ~ /TEMP[^\/]*\.DBF$/ { next }

{
  # Build the backup path and emit the mkdir/cp pair for the backup script
  backup = backuploc $0
  printf "mkdir -p $(dirname %s) 2> /dev/null\n",
    backup
  printf "cp -p %s %s\n", $0, backup
  # For the restore script, mask the database name (next-to-last path
  # component, and any filename prefix matching it) so it can be edited later
  DB = $(NF-1)
  $(NF-1) = "TOBEDECIDED"
  sub( "^" DB, "TOBEDECIDED", $NF )
  printf "cp -p %s %s\n", backup, $0  >rscript
}
Let me know whether or not this helps.
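As a sanity check, here is a self-contained run on one sample path (it writes the script above to backup.awk itself; "/foo" is just a placeholder backup root):

```shell
# write the awk program from the post to a file
cat > backup.awk <<'EOF'
BEGIN {
  FS = OFS = "/"
}
toupper($0) ~ /TEMP[^\/]*\.DBF$/ { next }
{
  backup = backuploc $0
  printf "mkdir -p $(dirname %s) 2> /dev/null\n",
    backup
  printf "cp -p %s %s\n", $0, backup
  DB = $(NF-1)
  $(NF-1) = "TOBEDECIDED"
  sub( "^" DB, "TOBEDECIDED", $NF )
  printf "cp -p %s %s\n", backup, $0  >rscript
}
EOF

# one sample path; file1 gets the backup commands, file2 the restore commands
printf '/redolog1/EDI/redo01.log\n' > data
awk -f backup.awk backuploc=/foo rscript=file2 data > file1
cat file1
cat file2
```

file1 should hold the mkdir/cp pair for the backup, and file2 the reverse copy with the database name masked as TOBEDECIDED.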

If you have nawk, use it instead of awk because on some systems awk is very old and lacks many useful features. Under Solaris, use /usr/xpg4/bin/awk.

For an introduction to Awk, see FAQ271-5564.
 