I have a coordinate file that contains vector points that draw closed polygons. The first and last data point have the same coordinate location. I am reformatting the file for use as input to another application.
Here's a test data set, with the first record as a column counter...
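The full test data set is cut off above, but for the record, here is a minimal gawk sketch of one way to mark where each polygon closes, assuming whitespace-separated records with x in column 1 and y in column 2 (the column layout, the handling of the counter record, and the blank-line separator are my assumptions, not the target application's format):
NR == 1 { print; next }          # pass the column-counter record through untouched
!started {                       # opening point of a new polygon
    fx = $1; fy = $2; started = 1
    print
    next
}
{
    print
    if ($1 == fx && $2 == fy) {  # point repeats the opening point: polygon is closed
        print ""                 # blank line between polygons
        started = 0
    }
}
Something like gawk -f mark_polys.awk points.dat would run it (the file names are placeholders).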
Vlad,
Thanks for the help.
I tried the following:
{
    if (FNR == 1) {
        if (NR > 1) close(fn)
        gsub(/([,]+|[ ]+)/, "_");
        gsub(/"/, "");
        print;
    }
}
and ran it like: gawk -f script *.dat
The output was:
bbb
aaaaaaaa
It did not substitute the space...
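One note in case it helps: gsub() without a third argument edits $0, i.e. the first line inside each .dat file here, not the file's name. If the goal is to rename the files themselves, a hedged sketch is to run gsub() against FILENAME and print mv commands (the fn variable name and the single quoting are choices of mine; review the output before piping it to sh):
FNR == 1 {
    fn = FILENAME
    gsub(/[ \t,]+/, "_", fn)     # commas and whitespace runs -> one underscore
    gsub(/"/, "", fn)            # drop embedded double quotes
    if (fn != FILENAME)
        printf "mv '%s' '%s'\n", FILENAME, fn
}
Something like gawk -f rename.awk *.dat | sh would then perform the renames; empty files are skipped because FNR==1 never fires for them.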
Hi,
I have done a search on previous submissions and found some help on this, but my example doesn't quite work.
I want to replace, in all filenames in a directory, every comma and every run of one or more whitespace characters with a single underscore.
If the filename has a " in it, I want to just remove it...
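For completeness, a plain-shell sketch of that renaming, assuming the files are in the current directory and no name contains a newline (both assumptions on my part):
for f in *; do
    new=$(printf '%s\n' "$f" | sed -e 's/"//g' -e 's/[[:space:],]\{1,\}/_/g')
    [ "$new" != "$f" ] && mv -- "$f" "$new"
done
The sed expression removes double quotes first and then turns every run of commas and/or whitespace into a single underscore; echo the mv line instead of running it to preview the result.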
I have found some similar posts, but none quite like my problem.
I have a data file 1 that looks like:
00001 0 000 xxxxx yyyy
...numerous records of data
00020 0 000 xxxxx yyyy
...numerous records of data
00033 0 000 xxxxx yyyy
...numerous records of data
Then I have a file 2 that looks like...
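The rest of this post (including what file 2 looks like) is cut off, so this is only an illustration of the usual gawk two-file idiom for picking records out of one file using keys read from another; treating field 1 as the join key is purely an assumption:
# Hypothetical usage: gawk -f pick.awk file2 file1
NR == FNR { want[$1]; next }     # while reading the first file named, remember its keys
$1 in want                       # while reading the second, print records whose key was seen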
Here is a sample display of various vector lines and their end points (1 or 2) as they would be mapped in an xy reference. For ease of display, the y is constant for the first/last points on each vector line; only the x is different.
-------------
--------------...
I have a series of randomly placed xy coordinates in a file; each pair of points marks the opposite ends of a vector line.
The input data file looks similar to this:
1842 1841.5 1 aaa
2338.5 1841.5 2 aaa
1891.5 1922.5 1 ccc
2394.5 1928.5 2 ccc
1798.5...
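This post is truncated before the actual question, but given the columns shown (x, y, endpoint number, line label), a common first step is to put both endpoints of a line on one record. A minimal gawk sketch, assuming endpoint 1 always appears before endpoint 2 for a given label, as in the sample:
# Fields assumed: $1 = x, $2 = y, $3 = endpoint number, $4 = line label.
$3 == 1 { x1[$4] = $1; y1[$4] = $2; next }
$3 == 2 { print $4, x1[$4], y1[$4], $1, $2 }   # label, x1, y1, x2, y2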
I have a file1:
aaa aaaa aa aaaa a
bbb bbbb bb bbbb b
ccc cccc cc cccc c
ddd dddd dd dddd d
eee eeee ee eeee e
fff ffff ff ffff f
ggg gggg gg gggg g
hhh hhhh hh hhhh h
I have a file2:
999.00000 999.00000 999.00000 2
999.00000 999.00000 999.00000 4
999.00000...
Could anyone give me a hint on how to use AWK to do the following?
I have a series of files from which I want to grep out the comment cards that begin with #, and then write the output to the original file name with an extension concatenated onto it.
Example:
grep -v "#" input_file_name.dat >...
Thanks,
I really need a one-liner to do the match statement, since I have a script that would include several hundred of these similar match-and-print statements.
Just to clarify:
I have a file that has similar but unique character strings such as:
123-2N
123-2N1
123-2N11
123-2Nb
223-2N
on...
Thanks for the response.
Can you also give a hint on how I would search for the field 123-2N11 and not the others, or for all of the records that start with 1 and have an "N" somewhere after that?
Thanks
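Two hedged one-liner sketches for the cases described, assuming the string sits in field 1 and the input file is called data.txt (both assumptions):
gawk '$1 == "123-2N11"' data.txt      # exactly 123-2N11, nothing longer or shorter
gawk '$1 ~ /^1.*N/' data.txt          # field starts with 1 and has an N somewhere after it
The == comparison (or an anchored regex such as $1 ~ /^123-2N11$/) tests the whole field, so longer strings that merely contain 123-2N11 are not picked up.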
I am having trouble understanding how regular expressions work in AWK.
I am trying to match certain characters in a substr field.
For example:
field 1=123-2N
if (match($1, /^[1][0-9][0-9].?[N]/)) { print }
In my input file I have several different combinations of like data, such as:
field...
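One note that may help: .? matches at most one character, so in ^[1][0-9][0-9].?[N] nothing can span the "-2" between 123 and the N, and the pattern never matches 123-2N. Two hedged variants that do match it (the field and sample value come from the post; the file name data.txt is a placeholder, and anchoring the end with $ would be needed if only exact values should match):
gawk '$1 ~ /^1[0-9][0-9]-.*N/' data.txt                        # three digits starting with 1, a dash, then an N later on
gawk '{ if (match($1, /^1[0-9][0-9]-.*N/)) print }' data.txt   # same test, keeping match() as in the attempt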
I am trying to break down an address file that consists of 3 addresses per group, where each group of 3 is separated by 2 blank lines (newline characters).
I want to print each separate 4-line address as tab- or comma-delimited output (one complete address per record).
Here's a dummy...
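The dummy data is cut off, so the exact shape of each block is an assumption here, but awk's paragraph mode is the usual tool for blank-line-separated groups. A minimal sketch that turns each block into one tab-delimited record:
# Blank-line-separated blocks become records, and their lines become fields.
BEGIN { RS = ""; FS = "\n"; OFS = "\t" }
{ $1 = $1; print }               # rebuild the record so OFS takes effect, then print one line per address
Setting OFS = "," instead gives the comma-delimited variant.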