dickiebird
Programmer
Hi Guys
With a flat file consisting of:
0001,123
0002,023
0002,989
0003,123
0004,093
0005,030
0005,987
0006,030
0007,939 etc etc
I want to print every line whose field 1 is unique, but where field 1 is duplicated I only want the second occurrence.
So the above would end up like:
0001,123
0002,989
0003,123
0004,093
0005,987
0006,030
0007,939 etc etc
The first listed 0002 has gone, as has the first 0005.
I got this far and then cried:
awk ' BEGIN { FS = "," }
{
    if (NR == 1)          # 1st line is OK to print anyway
    {
        print $0;
        next;
    }
    if (f1sav=$1)
    {
        print $0;
    }
    else
    {
        print f0save
    }
    f1sav=$1;
    f0save=$0;
}' allext > allexta
Whaddya think I need - apart from a brain implant ????
DB
Dickie Bird
db@dickiebird.freeserve.co.uk
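For comparison, a minimal sketch of one way to get the output above, assuming (as in the sample) that duplicate field-1 values always sit on adjacent lines; it keeps only the last line of each run of equal keys. The file names allext and allexta are taken from the attempt above, and prevkey/prevline are just illustrative variable names:

awk -F, '
    NR > 1 && $1 != prevkey { print prevline }   # key changed, so the held line was the last of its group
    { prevkey = $1; prevline = $0 }              # remember the current key and line
    END { if (NR > 0) print prevline }           # flush the final group
' allext > allexta

Note the test uses != to compare against the previous key; a single = in an awk condition, as in the attempt above, is an assignment rather than a comparison.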