Assuming field 2 of file2 is unique, read file2 into an array. Then read file1, checking whether each array element falls between field1 and field2:
awk ' BEGIN {
        while( getline < "file2" )
                arr[$2]=$1
}
{
        for(ind in arr)
                if(arr[ind] > $1 && arr[ind] < $2)
                        print arr[ind]" "ind
} ' file1
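As a hedged illustration with made-up sample data (the file contents below are my own invention), the lookup behaves like this:

```shell
# Hypothetical data: file1 holds low/high ranges, file2 holds value/key pairs.
printf '10 20\n30 40\n' > file1
printf '15 a\n35 b\n99 c\n' > file2

# Same script as above: load file2 keyed on field 2, then for each
# range line in file1 print the array elements that fall inside it.
awk ' BEGIN {
        while( getline < "file2" )
                arr[$2]=$1
}
{
        for(ind in arr)
                if(arr[ind] > $1 && arr[ind] < $2)
                        print arr[ind]" "ind
} ' file1
# prints:
# 15 a
# 35 b
```

Note that for(ind in arr) visits elements in an unspecified order, so if a range matched more than one element the output order within that line could vary.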
I recommend using something like this:
# UNTESTED!
find . -name "RECPT-ZONE*" |xargs rm -f
-exec rm ... spawns a Unix process for each file removed, whereas xargs batches many files into each rm invocation, and xargs guarantees not to overflow the command-line length limit.
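As an aside, on a modern find the "-exec ... {} +" form also batches arguments much like xargs, and GNU find/xargs can pass NUL-delimited names so filenames containing spaces survive. A sketch, untested in the same spirit as above:

```shell
# POSIX: -exec ... {} + groups many files into one rm invocation.
find . -name "RECPT-ZONE*" -exec rm -f {} +

# GNU find/xargs: NUL-delimited names are safe even with spaces or newlines.
find . -name "RECPT-ZONE*" -print0 | xargs -0 rm -f
```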
Travis:
I assumed he meant to copy the directory structure, but you might be right; maybe he wanted all the files copied to a separate directory.
Anyway, back in the day before Linux I used cpio a lot because of portability issues with tar. You can copy a directory structure using tar:
cd...
...directory:
cd /usr/my; find . -depth -print|cpio -pd /usr/mynew
What constitutes a text file - a file with a .txt extension? You can restrict the search to files with a .txt extension with something like this:
# UNTESTED
cd /usr/my; find . -type f -name "*.txt" -depth -print|cpio -pd /usr/mynew
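To sanity-check that, here is a throwaway demo (the /tmp paths and file contents are my own invention) showing that only the .txt file is copied and the directory tree is preserved:

```shell
# Hypothetical demo of the cpio pass-through copy.
mkdir -p /tmp/my/sub /tmp/mynew
echo hello > /tmp/my/sub/a.txt
echo skip  > /tmp/my/sub/b.log

# -p is pass-through mode, -d creates leading directories as needed.
cd /tmp/my; find . -type f -name "*.txt" -depth -print | cpio -pd /tmp/mynew

# /tmp/mynew/sub now contains a.txt but not b.log
```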
...the string and uses timelocal to return
# the number of seconds from the Epoch.
# No error checking!
function seconds_from_epoch {
        echo $* | perl -MTime::Local -ane '
                my $epochseconds = timelocal($F[5], $F[4], $F[3], $F[2], $F[1] - 1, $F[0]);
                print "$epochseconds\n"; '
}
dt1=20120915...
Since your data is on Windows and your database is on Unix, why not look at a client-server solution like Ab Initio? Ab Initio programming is done with a Graphical Development Environment. There is a learning curve, but it's Windows programming, not Unix scripting.
Ab Initio has a website you...
One way is to create a shell script that dynamically generates the awk script from a string - "8 21 34 47" in this case. Obviously there are enhancements that could be made:
#!/bin/ksh
file_name="theawk.ss"
echo "nawk ' BEGIN { FS=\",\" }
{
for(i=1; i <=NF; i++)
{
" > $file_name...
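The script is cut off above, so here is a hedged sketch of the same idea; the variable names and the seq/paste sample input are mine, and it builds the awk condition from the field list in-line rather than writing a script file:

```shell
# Hedged sketch: turn "8 21 34 47" into an awk condition, then duplicate
# those fields in a comma-separated record. Sample input is seq 1..50.
fields="8 21 34 47"                 # fields to duplicate
cond=""
for f in $fields; do
        cond="${cond:+$cond || }i == $f"
done
# cond is now: i == 8 || i == 21 || i == 34 || i == 47

seq 1 50 | paste -s -d, - |
awk ' BEGIN { FS=","; OFS="," }
{
        out = ""
        for(i = 1; i <= NF; i++) {
                out = out $i OFS
                if('"$cond"') out = out $i OFS
        }
        sub(/,$/, "", out)      # trim the trailing separator
        print out
} '
```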
I find it easier to print out each field and then duplicate fields 8 and 21. Older awks have a limitation on the number of fields that awk supports, but this works for nawk on Solaris:
#!/bin/ksh
nawk ' BEGIN { FS="," }
{
        for(i=1; i <= NF; i++)
        {
                if(i < 8 || (i > 8 && i < 21) || (i >...
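That last line is cut off; a speculative completion of the same loop (restructured with a direct equality test, shown with plain awk and a made-up 22-field input line) might look like this:

```shell
# Speculative completion: copy each field, repeating fields 8 and 21.
echo "f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13,f14,f15,f16,f17,f18,f19,f20,f21,f22" |
awk ' BEGIN { FS=","; OFS="," }
{
        out = ""
        for(i = 1; i <= NF; i++)
        {
                out = out $i
                if(i == 8 || i == 21)   # the fields being duplicated
                        out = out OFS $i
                if(i < NF)
                        out = out OFS
        }
        print out
} '
```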
'whoami' should work, but this parses the output of the id command to get the real user:
# get the real user id
realuser=$(id|sed -e 's,^[^(]*(,,' -e 's,).*$,,' -e 1q)
case $realuser in
root | olded )
echo "root or olded"
;;
*)
echo "blah"
exit 0
;;
esac
I take it you have redirected the output to a file to see whether you get an error?
This link offers two alternate commands that might work:
http://www.linuxquestions.org/questions/aix-43/how-do-i-get-a-cronjob-to-run-every-two-hours-in-aix-00-*-2-*-*-*-command-no-work-763034/
Using perl's matching operator, I can delete lines 3, 6, and 9 from a file:
perl -wnl -e 'print if $. !~ m/^(3|6|9)$/' datafile.txt
Is there syntax for the matching operator that allows deleting a range of lines, say from 3 to 9?
Thanks!
Ed