How do I use cat on many files (400+) when the file names are long?


tpbjr (MIS), Oct 8, 2004
First of all I would like to thank everyone for your help. I really do appreciate it.

This is my problem. I have over 400 files, and the file names can be as long as 'this_is_a_test_for_the_longest_frig_n_possible_file_I_could_thing_of20050103132126'. Others have helped with suggestions, but they are not working. Below are some things I have found out, plus a partial solution to my problem. Can you please fill in the blanks...

When I run the cat command with a wildcard (this_is_a_test*) on the directory of 400+ files with names as long as the above, I get the message 'arg list too long'.

However, I can run ls > list.lst and store the entire directory listing in the file list.lst.

So with that said: how can I read list.lst (which contains the list of all the files I want to cat together), looping through each line, storing it in a variable, and then using the variable's contents with the cat command? I have found that cat works when I type the entire file name, so if I can just get a variable to hold the full file name and use that variable with cat, my problem is solved. The find command does not work for me because it reads sub-directories, and I must not pick up anything from the sub-directories.
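Something shaped like this is what I am imagining (just a rough sketch, assuming list.lst holds one file name per line):

Code:
# rough idea: read list.lst a line at a time and append
# each named file onto the combined output
while read fname
do
   cat "$fname" >> newbigfile
done < list.lst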



Thank you for all your help

Tom
 
Please don't start a new thread on an existing query.

Change this:
PHV said:
If the real problem is 'arg list too long':
(ls | grep 'myfiles.*\.dat' | xargs cat) > newbigfile

to

Code:
(find . -type f -name 'this_is_a_test*' | xargs cat) > newbigfile

vlad
+----------------------------+
| #include<disclaimer.h> |
+----------------------------+
 
Sorry about the repost; I did not know I had to stay in one thread.

Can you tell me how you kept the find command from looking in sub-directories? Also, what does xargs do?

Thank you for all your help

Tom
 
Can you tell me how you kept the find command from looking in sub-directories?

I didn't. You probably don't have any files in the sub-directories matching the wildcarded '-name'.

If you did, you could have rewritten it like so:
Code:
(find . -type f -name 'this_is_a_test*' -prune | xargs cat) > newbigfile
Also, what does xargs do?

man xargs said:
DESCRIPTION
The xargs utility constructs a command line consisting of the utility and argument operands specified followed by as many arguments read in sequence from standard input as will fit in length and number constraints specified by the options. The xargs utility then invokes the constructed command line and waits for its completion. This sequence is repeated until an end-of-file condition is detected on standard input or an invocation of a constructed command line returns an exit status of 255.
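In plain terms: xargs reads the file names from the pipe and runs cat with as many of them as will safely fit on one command line, repeating until the input runs out, so no single command line can get 'too long'. A toy demonstration (any POSIX shell; echo stands in for cat here):

Code:
# -n 2 caps each constructed command line at two arguments,
# making the repeated invocations visible
printf '%s\n' a b c d e | xargs -n 2 echo
# prints:
# a b
# c d
# e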

vlad
 
Thanks for the additional information.

I included the -prune arg and it did not work.

The newbigfile was 380133 bytes before I added a subdirectory containing a copy of one of the files. After I added the subdirectory and the file, newbigfile was 381672 bytes, so the copy in the subdirectory is being picked up.

Any ideas???

Thank you for all your help

Tom
 
On my system, the "-prune" option didn't have the expected effect, but using "-level 0" prevented the find command from traversing below the current directory.
 
Code:
(find . ! -name . -prune -type f -name 'this_is_a_test*' | xargs cat) > newbigfile
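The '! -name . -prune' part works because every entry except '.' itself gets pruned, so find never descends into any sub-directory; '-type f' then keeps only the regular files at the top level. If your find is GNU find (an assumption on my part, since '-maxdepth' is not POSIX), the same restriction can be written more directly:

Code:
# -maxdepth 1 keeps find in the current directory (GNU find only)
(find . -maxdepth 1 -type f -name 'this_is_a_test*' | xargs cat) > newbigfile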

vlad
 
vgersh99,

That worked just great, but I forgot about one of the reasons I originally thought I would need to read from a file listing the directory's contents (refer to my original question/request).

Because another process is actively placing files in this folder all the time (I cannot predict when), I need to remove each file after it is concatenated, so that I do not process it again or delete a new file that arrived after the find command began its execution.

Does this make sense?

Thank you for all your help

Tom
 
Well.... I don't quite understand the reasoning behind the requirement, but.... here's the modified version: file 'list.lst' will contain the list of all the found files:

Code:
(find . ! -name . -prune -type f -name 'this_is_a_test*' | tee list.lst | xargs cat) > newbigfile
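Once the cat has finished, the same list can drive the cleanup, so you remove exactly the files that went into newbigfile; anything that arrived after the find ran is not in list.lst and is left alone (this assumes the names contain no whitespace, which holds for the names shown here):

Code:
# remove only the files recorded in list.lst
xargs rm < list.lst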

vlad
 
If you stick to simple concepts the solution is trivial. Compare this to my answer in your original thread:
Code:
<your find command goes here> | while IFS= read -r filename
do
   cat "$filename"   # append this file's contents to destfile
   rm "$filename"    # then remove it so it is never processed twice
done > destfile      # redirect the whole loop, not the read

P.S. This is *not* a useless use of a cat.
 
Another method that might work is to set positional parameters, e.g...
Code:
set -- this_is_a_test*   # -- guards against names beginning with '-'
while [[ $# -gt 0 ]]
do
   cat "$1"
   rm "$1"
   shift
done > newbigfile
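This sidesteps 'arg list too long' because set is a shell builtin: the wildcard is expanded inside the shell itself and never passed through exec(), whose argument-length limit is what the original cat was hitting. Each invocation of cat then sees exactly one file name.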
 