Tek-Tips is the largest IT community on the Internet today!

Comparing Two files using awk and a while loop

Status
Not open for further replies.

RaoulZa
Technical User
Sep 5, 2002
GB
Hi

I'm fairly new to UNIX, and I was after a bit of help.

I have two files that are identical except for some field changes on certain lines. What I want to do is create a while loop that reads the two fields I am checking from each line of the first file and passes them into an awk script as variables, so I can compare them against the other file.

Once the other file has been compared against those variables, I want to feed in the two variables from the next line of the first file.

It is the loop, and the feeding of the file's fields into the variables, that I am having difficulty with. Can anybody help me?

Regards

Matt Harrill
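A minimal sketch of the loop described above: `while IFS=, read` splits each line of the first file into the two fields, and `-v` hands them to awk as variables. The file names, field positions, and sample data here are made up for illustration; adjust them to the real audit files.

```shell
# Create two tiny stand-in files (id,location on each line)
printf 'M14609,ROOM-A\nM14610,ROOM-B\n' > file1
printf 'M14609,ROOM-C\nM14610,ROOM-B\n' > file2

# Read file1 line by line, splitting each line on commas
while IFS=, read -r id loc
do
    # -v copies the shell values into awk variables of the same names
    awk -F, -v id="$id" -v loc="$loc" '
        $1 == id && $2 != loc {
            print id " moved from " loc " to " $2
        }' file2
done < file1
```

Note that this rescans file2 once per line of file1, which is fine for small audits but slow for large ones.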
 
hi,

if you simply want to compare the two files, why don't you use
sdiff, i.e. sdiff -s file1 file2
 
tried it - no joy.

I know diff -e gives me a version I can use directly for output, but it also includes gibberish from the file.

I think my problem lies in the fact that I can't do a join on more than one field at a time. I basically have an audit file where a PC has the same id (primary key) but has moved location (the 5th field on each line). The pain is that the location is not unique. So what I am aiming to do is grab the pertinent fields from one file (e.g. the old location and current id), then compare them against the same fields from the other file. I think I am stuck because I don't know the shell script code to separate the fields, or how to read the file in line by line.

RaoulZa
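One way to sidestep the join limitation is a single awk pass over both files: remember the old location keyed by id while reading the first file (the `NR == FNR` test is true only there), then report ids whose location differs in the second. The file names and field layout (id in field 1, location in field 5) are assumptions based on the description; on Solaris use nawk rather than awk.

```shell
# Stand-in audit files: comma-delimited, id in field 1, location in field 5
printf '"M14609",,"X",,"OLD-LOC",0\n"M14610",,"X",,"SAME",0\n' > old_audit
printf '"M14609",,"X",,"NEW-LOC",0\n"M14610",,"X",,"SAME",0\n' > new_audit

# First file: store location keyed by id; second file: compare
awk -F, '
    NR == FNR { oldloc[$1] = $5; next }   # still reading the first file
    ($1 in oldloc) && oldloc[$1] != $5 {  # same id, different location
        print $1 ": " oldloc[$1] " -> " $5
    }' old_audit new_audit
```

Because the report only fires when the id exists in both files, lines that were added or removed between audits are ignored automatically.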
 
since i can't quite tell what you want ... this might help ... if you could give examples of the 2 files going in, and and example of the expected output we could probably code it.

Code:
nawk '{
    # read the matching line from the second file; "file2" must be quoted,
    # otherwise awk reads it as an (empty) variable name
    getline compline < "file2"
    if (compline != $0) {
        n1 = split($0, a)
        n2 = split(compline, b)
        n = (n1 < n2) ? n1 : n2        # compare up to the shorter line
        for (i = 1; i <= n; i++)       # <= so the last field is checked too
            if (a[i] != b[i])
                printf("%s\t!=\t%s\n", a[i], b[i])
    }
}' file1
 
"M14609",,"GATLIFF ROAD",,"NPO MGMENT","129.1.14.150","DCHP",,5/9/02 0:00:00,"In Test Room for PFW UAT.","T2-Type",,,0

this is the type of data I am using - comma-delimited, all on one line.

What I am looking for between the two files is where the id (i.e. M14609) is the same but the location (i.e. NPO MGMENT) has changed to another location. I only want to print lines that satisfy that specific criterion, as we have lines missing and lines added (the new audit as opposed to the old one).

does this help at all?

matt
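Given that format, a join-based sketch may also work: join pairs lines on field 1 (the id) and silently drops ids missing from either file, which takes care of the added and deleted lines, and `-o` picks out the id plus both location fields for awk to compare. The file names and data below are invented for illustration; note that `-F,` would mis-split any quoted field that itself contains a comma.

```shell
# Stand-in audits in the format shown above (field 1 = id, field 5 = location);
# each file also has a line the other lacks
printf '"M14609",,"GATLIFF ROAD",,"NPO MGMENT",0\n"M14700",,"A",,"KEEP",0\n' > audit_old.csv
printf '"M14609",,"GATLIFF ROAD",,"TEST ROOM",0\n"M14800",,"B",,"NEW",0\n' > audit_new.csv

# join needs its inputs sorted on the join field
sort -t, -k1,1 audit_old.csv > old.sorted
sort -t, -k1,1 audit_new.csv > new.sorted

# Emit id, old location, new location; print only changed locations
join -t, -o 1.1,1.5,2.5 old.sorted new.sorted |
awk -F, '$2 != $3 { print $1 ": " $2 " -> " $3 }'
```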
 
