I would like to have this code in a file so I can re-run it with the -f option rather than retyping it on the command line.
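For reference, a minimal sketch of the -f setup; split.awk and input.dat are placeholder names of mine:

# contents of split.awk -- the program body, exactly as it
# would appear between the quotes on the command line
FNR == 1 { print "reading " FILENAME }

# run it with:
#   awk -f split.awk input.dat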
I now have the code printing the H records properly, but they only print in the first output file and not in the following files.
Here's my current code:
{...
Thanks Feherke,
This part of my code works properly, with the H records being passed to all output files, but I can't get the non-H records to print in full for each output file:
FNR==1 {
    hdr_count = 0;
    while (substr($1,1,1) == "H")
    {...
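A rough sketch of one way that loop could be completed, assuming the headers should be saved for reuse (the array name is mine); a plain getline here advances $0 to the next record:

FNR == 1 {
    hdr_count = 0
    while (substr($1,1,1) == "H") {
        hdr[++hdr_count] = $0        # save each header line
        if ((getline) <= 0) break    # advance; stop at end of input
    }
    # $0 now holds the first non-H record and falls through
    # to the rules below, so it is not lost
}

The saved hdr[] entries can then be printed into each output file when it is first opened.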
Feherke/others
I changed the code, and now I am able to get the "H" records printed at the top of each output file, but now I only get the last record of each group (grouped on field 2 of the data).
Here is the input file I am using:
H1
H2
H3
H
H 44
GROUP1 aa 1
GROUP1 aa 2...
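A sketch of how the two pieces could fit together, assuming the headers start with "H" and field 2 names the output file (the file naming is my assumption). awk opens each output file once and keeps appending to it for the rest of the run, so every record of a group is kept, not just the last:

/^H/ { hdr = hdr $0 ORS; next }      # collect the header block
{
    out = $2 ".DAT"
    if (!(out in seen)) {            # first record of this group
        seen[out] = 1
        printf "%s", hdr > out       # write the saved headers once
    }
    print > out                      # append the data record
}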
Thanks Feherke.
This works, except I still would like to pass the H records at the beginning of the input file to each of the output files.
Thanks again.
Thanks Feherke for responding.
The script would create three files: AA.DAT, BB.DAT, and CC.DAT.
Each file contains the records whose field 2 matches that name.
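The routing itself is a one-liner; a minimal sketch, assuming field 2 supplies the base name directly:

{ print > ($2 ".DAT") }    # each record goes to the file named by field 2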
Thanks
I have a file that contains some header records at the top, each starting with "H", that I want to write at the top of all of the output files created from the non-H records' field 2.
Everything works as I want except for getting the multiple H records printed to each file.
Any...
Thanks CaKiwi,
I changed your code of:
if (match($0, "SUB ")) {
    count = 0; type = "SUB"; group_id++; subt = $2
    if (int(subt/10) == 4) subt = 4
    next
}
to:
if (match($0, "SUB ")) {
    count = 0; type = "SUB"; group_id++; subt = $2...
Thanks CaKiwi.
The output is slightly off in count and format when I have input such as the following after the 4th point:
SUB 40
XY 1000000 2000000
XY 2226774 339553.71
XY 2226782 339255.27
XY 2226772 339594.57
XY 2226772 339594.57
XY 2226774 339553.71
XY 2226782 339255.27
XY 2226772 339594.57
XY...
I have a data file that contains area polygons and sub polygons that I write out in a different format. My script works except where a sub polygon contains only 4 xy pairs; there I need to add the first or last xy of that group to the beginning or end in order for another program to handle a...
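A rough sketch of the padding idea, assuming records look like the SUB/XY sample quoted earlier in the thread (the function and array names are mine):

$1 == "SUB" { dump(); print; next }   # new group starts: flush the old one
$1 == "XY"  { xy[++n] = $0; next }    # buffer the group's points
            { dump(); print }         # pass anything else through
END         { dump() }

function dump(   i) {
    if (n == 4)
        xy[++n] = xy[1]               # repeat the first pair to pad the group
    for (i = 1; i <= n; i++) print xy[i]
    n = 0
}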
How can I simply use the result of the match() or index()/substr() functions to find out the field number where the match is found?
I want to increment this result by one and then store the next field's value in a variable, in order to print it later.
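match() and index() return a character position within a string rather than a field number, so the usual idiom is to test each field in turn; a minimal sketch, where the pattern and variable names are placeholders of mine:

{
    for (i = 1; i <= NF; i++)
        if (index($i, "TARGET")) {   # i is the field number of the match
            next_val = $(i + 1)      # the value of the following field
            break
        }
    # next_val can now be stored and printed later
}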
Thanks
Thanks PHV!
It did remove the syntax error, but the format of the output should be like:
"ITEM1","",-6
-79.470720401289327,40.186449458386832
-79.470817952283511,40.18634058672292
-79.470915474880968,40.186231700890886
-79.471012969863153,40.186122801305608
-79.471110437609283,40.186013888181662...
Can someone help me with a syntax error on this small gawk script?
My input is like:
"ITEM1","",-6
-79.471207878102078,40.185904961536473
-79.471110437609283,40.186013888181662
-79.471012969863153,40.186122801305608
-79.470915474880968,40.186231700890886
-79.470817952283511,40.18634058672292...
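Comparing the two samples, the desired output looks like the coordinate lines printed in reverse order under each header; if that is the goal (a guess on my part), a rough sketch with buffer names of mine:

/^"/ {                        # header lines start with a double quote
    dumprev()
    print
    next
}
{ buf[++n] = $0 }             # buffer the coordinate lines
END { dumprev() }

function dumprev(   i) {
    for (i = n; i >= 1; i--) print buf[i]
    n = 0
}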
Thanks PHV for responding.
Here's what I have, but it is not looping through the control file as it should: it only keeps the last xy pair instead of one for each record that matches on fields 2 and 3.
BEGIN{
    if (!fn) fn = "test_xy_input.dat"
    while ((getline < fn) > 0) {
        split($0,b," ")...
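A common fix is to key the stored values by fields 2 and 3 instead of overwriting one scalar; a rough sketch, where exactly which of b's elements hold the xy pair is my assumption:

BEGIN {
    if (!fn) fn = "test_xy_input.dat"
    while ((getline line < fn) > 0) {
        split(line, b, " ")
        xy[b[2] SUBSEP b[3]] = line   # one saved record per field-2/3 pair
    }
    close(fn)
}
($2 SUBSEP $3) in xy {
    print $0, xy[$2 SUBSEP $3]        # the matching control record, not just the last one read
}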
Is there a simple awk script to do the following?
I have a data file 1 such as:
FIELD1,FIELD2A,100,1000,10000
FIELD1,FIELD2A,101,2222,33333
FIELD1,FIELD2A,133,44444,5555
FIELD1,FIELD2,400,1,1
FIELD1,FIELD2E,166,1,1
and file 2 is like:
FIELD__1 FIELD2A 100 0.0 0.0 0.000000 1475...
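The layouts suggest the two files line up on their second and third fields; assuming that is the intended match (a guess from the samples), a rough two-pass sketch, run as awk -f join.awk file1 file2:

NR == FNR {                    # first pass: file 1, comma-separated
    split($0, a, ",")
    row[a[2] SUBSEP a[3]] = $0
    next
}
($2 SUBSEP $3) in row {        # second pass: file 2, whitespace-separated
    print row[$2 SUBSEP $3], $0
}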
I have the following data file, which has coordinate and value records that describe (1) longitude, (2) latitude, and (3) z value:
-86.01550275487955,30.25826049996856,1.940,"","",""
-86.01541483202962,30.25795435801949,1.940,"","",""
-86.01442916957223,30.25524060127955,1.940,"","",""...
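A minimal sketch of reading these records per the description above (the variable names are mine):

BEGIN { FS = "," }
{
    lon = $1; lat = $2; z = $3
    print lon, lat, z            # placeholder action; real processing goes here
}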