
Perl - should I use array or hash of hashes?

netrookie
Perl newbie here. I have some working code I strung together with help from others along the way. I'm still unclear on how I should approach this problem: with an array of hashes, or with hash refs? Right now it works using hashes, but I was also advised that I could use an array for this. As a newbie, I'm not sure which is better. The function of my script is fairly simple, and I'm looking for best practices. Please advise, and share any good examples I can use. Thank you!

Purpose of my script:
1) Mine through a server log file and pull out the entries for a date I supply, either manually or systematically
2) For each date given, give me the backup set name
3) For every backup set name, grab any of these values if they appear: backup-size, backup-time, backup-status, or ERROR if it's given for that backup set
4) Generate a datafile with these values delimited in whatever format. This datafile will be used later as feed to another system.
5) Current issues:
My current script uses hashes, again from help. I'm still confused about hashes of hashes and their use. I didn't go the array route because I'm not sure how to set that up.
6) Problems formatting the output in an ordered fashion

So I'm looking at a structure like this:
server1:
MyDate (today's date)
--> MyBackupSet
--> Backup Attribute = Backup Value

server2:
MyDate (today's date)
--> MyBackupSet
--> Backup Attribute = Backup Value

...

Perl code:
Code:
use strict;
use warnings;
use File::Basename;
use Data::Dumper;

use constant debug => 0;   # set to 1 for diagnostic prints

my %MyItems;
@ARGV = ('/var/log/server1.log') unless @ARGV;   # <> reads the files named in @ARGV
my $mon  = 'Aug';
my $day  = '06';
my $year = '2010';

while (my $line = <>)
{
    chomp $line;
    print "Line: $line\n" if debug;

    if ($line =~ m/(.* $mon $day) \d{2}:\d{2}:\d{2} $year: ([^:]+):backup:/)
    {

        my $server = basename $ARGV, '.log';
        my $BckupDate="$1 $year";
        my $BckupSet =$2;
        print "$BckupDate ($BckupSet): " if debug;

        $MyItems{$server}{$BckupSet}->{'MyLogdate'} = $BckupDate;
        $MyItems{$server}{$BckupSet}->{'MyDataset'} = $BckupSet;
        $MyItems{$server}{$BckupSet}->{'MyHost'} = $server;
        #$MyItems{$server}{$BckupSet}->{'MyServer'} = $server; 

        if ($line =~ m/(ERROR|backup-size|backup-time|backup-status)[:=](.+)/)
        {
            my $BckupKey=$1;
            my $BckupVal=$2;
            $MyItems{$server}{$BckupSet}->{$BckupKey} = $BckupVal;
            print "$BckupKey=$BckupVal\n" if debug;
        }
    }
}

print Dumper(%MyItems);
Output from Dumper:
Code:
$VAR1 = 'server1';
$VAR2 = {
          'abc1.mil.mad' => {
                                 'ERROR' => ' If you are sure  is not running, please remove the file and restart ',
                                 'MyLogdate' => 'Fri Aug 06 2010',
                                 'MyHost' => 'server1',
                                 'MyDataset' => 'abc1.mil.mad'
                               },
          'abc2.cfl.mil.mad' => {
                                  'backup-size' => '187.24 GB',
                                  'MyLogdate' => 'Fri Aug 06 2010',
                                  'MyHost' => 'server1',
                                  'backup-status' => 'Backup succeeded',
                                  'backup-time' => '01:54:27',
                                  'MyDataset' => 'abc2.cfl.mil.mad'
                                },
          'abc3.mil.mad' => {
                                'backup-size' => '46.07 GB',
                                'MyLogdate' => 'Fri Aug 06 2010',
                                'MyHost' => 'server1',
                                'backup-status' => 'Backup succeeded',
                                'backup-time' => '00:41:06',
                                'MyDataset' => 'abc3.mil.mad'
                              },
          'abc4.mad_lvm' => {
                                'backup-size' => '422.99 GB',
                                'MyLogdate' => 'Fri Aug 06 2010',
                                'MyHost' => 'server1',
                                'backup-status' => 'Backup succeeded',
                                'backup-time' => '04:48:50',
                                'MyDataset' => 'abc4.mad_lvm'
                              }
        };

Sample log file used to create this datafile:
Code:
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: START OF BACKUP
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Initialization
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:WARNING: Binary logging is off.
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: License check successful
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: License check successful for lvm-snapshot.pl
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-set=abc2.cfl.mil.mad
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-date=20100806000004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: abcsql-server-os=Linux/Unix
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-type=regular
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: host=abc2.cfl.mil.mad.melster.com
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-date-epoch=1281078004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: retention-policy=3D
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: sql-abc-version=ZRM for MySQL Enterprise Edition - version 3.1
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: abcsql-version=5.1.32-Melster-SMP-log
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-directory=/home/backups/abc2.cfl.mil.mad/20100806000004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-level=0
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-mode=raw
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Initialization
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Running pre backup plugin
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Running pre backup plugin
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Flushing logs
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Flushing logs
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Creating snapshot based backup
Fri Aug 06 00:00:10 2010: abc2.cfl.mil.mad:backup:INFO: 
Fri Aug 06 00:48:48 2010: abc4.mad_lvm:backup:INFO: raw-databases-snapshot=test abcsql sgl
Fri Aug 06 00:48:51 2010: abc4.mad_lvm:backup:INFO: PHASE END: Creating snapshot based backup
Fri Aug 06 00:48:51 2010: abc4.mad_lvm:backup:INFO: PHASE START: Calculating backup size & checksums
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: last-backup=/home/backups/abc4.mad_lvm/20100804200003
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-size=422.99 GB
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: PHASE END: Calculating backup size & checksums
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: read-locks-time=00:00:04
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: flush-logs-time=00:00:00
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-time=04:48:50
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-status=Backup succeeded
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: Backup succeeded
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: PHASE START: Running post backup plugin
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE END: Running post backup plugin
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE START: Cleanup
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE END: Cleanup
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: END OF BACKUP

Format I would like to try and create if possible (datafile):
Code:
MyHost=>server1;MyLogdate=>Fri Aug 06 2010;MyDataset=>abc2.cfl.mil.mad;backup-time=>Fri Aug 06 2010;backup-status=>Backup succeeded
 
A hash definitely seems the way to go, as you are collecting different data from different lines for the same [tt]BckupSet[/tt].
However:
- you don't need the first hash key [tt]$server[/tt] (nor the field [tt]MyHost[/tt]), as it is constant throughout each program run
- the field [tt]MyLogdate[/tt] is likewise constant
- you could use an array for the content of the hash under the key [tt]BckupSet[/tt], as you have a fixed set of data to collect. You would define, e.g., that array element [0] holds [tt]MyLogdate[/tt] (but see above), element [1] holds [tt]backup-time[/tt], and so on. This would be faster, but unless your files run to millions of lines your code is fine: it is more readable and easier to extend to other data elements.
The output with your data structure would simply be (no sorting, untested, and with an extra semicolon at end of line):
Code:
for $server(keys%MyItems){
  for $BckupSet(keys%{$MyItems{$server}}){
    for(keys%{$MyItems{$server}{$BckupSet}}){
      print$_,'=>',$MyItems{$server}{$BckupSet}{$_},';';
    }
    print"\n";
  }
}
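
If you do want to try the array layout described above, here is a minimal sketch (the slot names and sample values are my own, for illustration only):

```perl
use strict;
use warnings;

# Fixed slot positions for each backup set's data (hypothetical names)
use constant { BTIME => 0, BSIZE => 1, BSTATUS => 2, BERROR => 3 };

my %MyItems;
$MyItems{'abc2.cfl.mil.mad'}[BTIME]   = '01:54:27';
$MyItems{'abc2.cfl.mil.mad'}[BSIZE]   = '187.24 GB';
$MyItems{'abc2.cfl.mil.mad'}[BSTATUS] = 'Backup succeeded';

for my $set (sort keys %MyItems) {
    my $rec = $MyItems{$set};
    print $set, ';backup-time=>', $$rec[BTIME],
          ';backup-size=>', $$rec[BSIZE],
          ';backup-status=>', $$rec[BSTATUS], "\n";
}
```

The constants keep the numeric indices readable; the data itself is just an array ref per backup set.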


Franco
: Online engineering calculations
: Magnetic brakes for fun rides
: Air bearing pads
 
@Franco - Thank you! This is great. You basically nailed what I've been trying to figure out about the structure of this. You're right that $server is constant within a single run, since I am mining all the backup values from one log per $server. I added that key without being sure it was the best thing to do. You are right about LogDate as well.

Would you mind showing me how to construct this with an array as you have pointed?

As for your print code. This works great. I had to add "my" to get it to work.

Also, one other question. As you can see, I was using $ARGV to pass in server1.log.

If I want to go through a list of server logs, what is the best way to do that? Should I point $ARGV at a separate text file listing the server log file names, like so?

"prodlogs.txt"
Code:
/var/log/server1.log
/var/log/server2.log
/var/log/server3.log
 
Why not just put the list of log files on the command-line for your script and loop through @ARGV to process them?

Code:
/path/to/analyse_logs.pl /var/log/server*.log
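
A minimal sketch of that idea in Perl (the sub name and the line-counting body are placeholders for the real parsing):

```perl
use strict;
use warnings;
use File::Basename;

# Process each log file named on the command line in turn.
# (The diamond operator <> would also read them all in sequence;
# this version makes the per-file boundary explicit.)
sub process_logs {
    my @logfiles = @_;
    my %lines_per_server;
    for my $logfile (@logfiles) {
        my $server = basename($logfile, '.log');
        open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
        while (my $line = <$fh>) {
            chomp $line;
            $lines_per_server{$server}++;   # real parsing would go here
        }
        close $fh;
    }
    return \%lines_per_server;
}

process_logs(@ARGV);   # e.g. analyse_logs.pl /var/log/server*.log
```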


Annihilannic.
 
@Annihilannic - Thanks for the suggestion. I am using @ARGV for this.
 
This is how I would do it for efficiency (but not readability) purposes:
Code:
use strict;
use warnings;
my%MyItems;
my$server='server1';
my$monday='Aug 06';
my$year='2010';
my$BckupDate="$monday $year";
my$rey=qr/$year: /;
my$rei=qr/:backup:/;
my$reinfo=qr/^INFO: (.+)$/;
my$re0=qr/^backup-status=(.+)$/;
my$re1=qr/^backup-time=(.+)$/;
my$re2=qr/^backup-size=(.+)$/;
my$re3=qr/^ERROR: (.+)$/;
my($info,$BckupSet,$ref);
while(<DATA>){
  if(index($_,$monday)==4){
    (undef,$info)=split$rey;
    if($info){
      ($BckupSet,$info)=split$rei,$info;
      if($info=~$reinfo){
        for($1){
          /$re0/&&($MyItems{$BckupSet}[0]=$1,last);
          /$re1/&&($MyItems{$BckupSet}[1]=$1,last);
          /$re2/&&($MyItems{$BckupSet}[2]=$1,last);
          /$re3/&&($MyItems{$BckupSet}[3]=$1,last);
        }
      }
    }
  }
}
for(keys%MyItems){
  print'MyHost=>',$server,';MyLogdate=>',$BckupDate,';MyDataset=>',$_,';';
  $ref=$MyItems{$_};
  print'backup-time=>',$$ref[1],';'if$$ref[1];
  print'backup-size=>',$$ref[2],';'if$$ref[2];
  print'backup-status=>',$$ref[0],';'if$$ref[0];
  print'ERROR=>',$$ref[3],';'if$$ref[3];
  print"\n";
}
__DATA__
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: START OF BACKUP
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Initialization
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:WARNING: Binary logging is off.
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: License check successful
Fri Aug 06 00:00:04 2010: abc2.cfl.mil.mad:backup:INFO: License check successful for lvm-snapshot.pl
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-set=abc2.cfl.mil.mad
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-date=20100806000004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: abcsql-server-os=Linux/Unix
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-type=regular
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: host=abc2.cfl.mil.mad.melster.com
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-date-epoch=1281078004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: retention-policy=3D
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: sql-abc-version=ZRM for MySQL Enterprise Edition - version 3.1
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: abcsql-version=5.1.32-Melster-SMP-log
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-directory=/home/backups/abc2.cfl.mil.mad/20100806000004
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-level=0
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: backup-mode=raw
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Initialization
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Running pre backup plugin
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Running pre backup plugin
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Flushing logs
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE END: Flushing logs
Fri Aug 06 00:00:05 2010: abc2.cfl.mil.mad:backup:INFO: PHASE START: Creating snapshot based backup
Fri Aug 06 00:00:10 2010: abc2.cfl.mil.mad:backup:INFO: 
Fri Aug 06 00:48:48 2010: abc4.mad_lvm:backup:INFO: raw-databases-snapshot=test abcsql sgl
Fri Aug 06 00:48:51 2010: abc4.mad_lvm:backup:INFO: PHASE END: Creating snapshot based backup
Fri Aug 06 00:48:51 2010: abc4.mad_lvm:backup:INFO: PHASE START: Calculating backup size & checksums
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: last-backup=/home/backups/abc4.mad_lvm/20100804200003
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-size=422.99 GB
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: PHASE END: Calculating backup size & checksums
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: read-locks-time=00:00:04
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: flush-logs-time=00:00:00
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-time=04:48:50
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: backup-status=Backup succeeded
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: Backup succeeded
Fri Aug 06 00:48:54 2010: abc4.mad_lvm:backup:INFO: PHASE START: Running post backup plugin
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE END: Running post backup plugin
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE START: Cleanup
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: PHASE END: Cleanup
Fri Aug 06 00:48:55 2010: abc4.mad_lvm:backup:INFO: END OF BACKUP
Output:
Code:
MyHost=>server1;MyLogdate=>Aug 06 2010;MyDataset=>abc4.mad_lvm;backup-time=>04:48:50;backup-size=>422.99 GB;backup-status=>Backup succeeded;

Franco
: Online engineering calculations
: Magnetic brakes for fun rides
: Air bearing pads
 
You really don't like to waste bytes on white space, do you Franco! :)

Annihilannic.
 
Arrays are best when you have a "spreadsheet's" or "table's" worth of information, or a mini database. Hashes are better when the data is more random or sparse.

Say you wanted to store in your program (or read in from a file) a bunch of info about your clients: name, phone, address, balance due, whatever. Since hashes are only key-value pairs, you wouldn't be able to use a simple hash to store this info. You could use several hashes (one for each column), and some here would probably recommend that... I find it best in this case to use a two-dimensional array.

With arrays, you pretty much have to use a foreach loop to find what you want, and then use that instance in the loop to do what you want to do. Hashes can be faster if you can jump right to what you want.

Example with a 2D array:

Code:
my $search = 'Smith';   # however you get your search value

for my $i (0 .. $#array)
{
  if ($array[$i][0] =~ /\Q$search\E/)
  {
    my $address  = $array[$i][1]; # this row, 2nd column
    my $phoneNum = $array[$i][2];
    my $balanceD = $array[$i][3];
    #etc...
    if ($balanceD > 0)            # numeric comparison ('gt' is for strings)
    {
      print "$array[$i][0] at $address owes $balanceD\n";
    }
  }
}
 
@xhonzi - thanks for your input.

As for my data: it's not random. For every backup set I have, I am looking for particular attributes, but on occasion a value will be missing for some rows.

What I would like to do is create a CSV-type datafile with this if possible.

Thanks.
 
You CAN have empty (undef) "cells" in your array, so you don't need to be too worried about that.

If I were creating a .csv, I would personally go with a 2D array. But, like I said, I'm sure there are others here that would argue multiple hashes to be superior.
 
Another plus for arrays... they stay in the order you expect. Hashes may not.
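
(For what it's worth, if you do stay with a hash, you can still get deterministic output by sorting the keys. A small sketch with made-up values:)

```perl
use strict;
use warnings;

my %backup = (
    'abc3.mil.mad'     => '46.07 GB',
    'abc2.cfl.mil.mad' => '187.24 GB',
    'abc4.mad_lvm'     => '422.99 GB',
);

# Hash iteration order is unpredictable; sorting the keys makes it stable.
for my $set (sort keys %backup) {
    print "$set=>$backup{$set};\n";
}
```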

Annihilannic.
 
My Perl friends!
Thank you for your recent help on this parse-log script. I need to make some changes to the logic of my code. Instead of filtering on date, I need to somehow keep track of the last record I parsed. I pull a copy of the log file on an hourly basis, and new records are added each time.

What I need to do is change this logic to read not from a given date, but from the last log entry I read.

For example, if a backup starts the night before (Aug 15 20:00) and continues into the next day (Aug 16 00:33), I would miss the Aug 15 20:00 entry, since my script only reads the data for the 16th; hence the need to stop filtering on date alone.

Can you provide a solution or adjustment to my code to handle this? I'm thinking of something like creating a bookmark... I'm not sure. Is there an easier way?

[small]
Sun Aug 15 20:00:03 2010: backup.set2_lvm:backup:INFO: START OF BACKUP
Sun Aug 15 20:00:04 2010: backup.set2_lvm:backup:INFO: backup-set=backup.set2_lvm
Sun Aug 15 20:00:04 2010: backup.set2_lvm:backup:INFO: backup-date=20100815200003
Sun Aug 15 20:00:04 2010: backup.set2_lvm:backup:INFO: backup-type=regular
Sun Aug 15 20:00:04 2010: backup.set2_lvm:backup:INFO: backup-date-epoch=1281927603
Sun Aug 15 20:00:04 2010: backup.set2_lvm:backup:INFO: backup-directory=/home/backups/backup.set2_lvm/20100815200003
Mon Aug 16 00:00:04 2010: backup.set1_lvm:backup:INFO: START OF BACKUP
Mon Aug 16 00:00:05 2010: backup.set1_lvm:backup:INFO: backup-set=backup.set1_lvm
Mon Aug 16 00:00:05 2010: backup.set1_lvm:backup:INFO: backup-date=20100816000003
Mon Aug 16 00:00:05 2010: backup.set1_lvm:backup:INFO: backup-type=regular
Mon Aug 16 00:00:05 2010: backup.set1_lvm:backup:INFO: backup-date-epoch=1281942003
Mon Aug 16 00:33:15 2010: backup.set2_lvm_lvm:backup:INFO: last-backup=/home/backups/backup.set2_lvm_lvm/20100814200003
Mon Aug 16 00:33:15 2010: backup.set2_lvm_lvm:backup:INFO: backup-size=424.53 GB
Mon Aug 16 00:33:15 2010: backup.set2_lvm_lvm:backup:INFO: backup-time=04:33:12
Mon Aug 16 00:33:15 2010: backup.set2_lvm_lvm:backup:INFO: backup-status=Backup succeeded
Mon Aug 16 00:33:15 2010: backup.set2_lvm_lvm:backup:INFO: Backup succeeded
Mon Aug 16 00:33:16 2010: backup.set2_lvm_lvm:backup:INFO: END OF BACKUP
Mon Aug 16 01:59:07 2010: backup.set1_lvm:backup:INFO: last-backup=/home/backups/backup.set1_lvm/20100815000006
Mon Aug 16 01:59:07 2010: backup.set1_lvm:backup:INFO: backup-size=187.24 GB
Mon Aug 16 01:59:07 2010: backup.set1_lvm:backup:INFO: backup-time=01:59:04
Mon Aug 16 01:59:07 2010: backup.set1_lvm:backup:INFO: backup-status=Backup succeeded
Mon Aug 16 01:59:07 2010: backup.set1_lvm:backup:INFO: Backup succeeded
Mon Aug 16 01:59:09 2010: backup.set1_lvm:backup:INFO: END OF BACKUP
[/small]

Here's my code now: (thanks to your help)
Code:
use strict;
use warnings;
use File::Basename;
use Data::Dumper;

use constant debug => 0;   # set to 1 for diagnostic prints

sub spGetCurrentDateTime{
  my ($sec, $min, $hour, $mday, $mon, $year) = localtime();
  my @abbr = qw( Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec );
  return sprintf "%s %02d %4d", $abbr[$mon], $mday, $year+1900; #Returns => 'Jul 26 2010'
}

my %MyItems;
# @ARGV holds the log file names, e.g. /var/log/server1.mydomain.com.backup-software.log
my ($mon, $day, $year) = split ' ', spGetCurrentDateTime();  # compute once, not per line

while (my $line = <>){   # <> works through every file named in @ARGV in turn
  chomp $line;
  print "Line: $line\n" if debug;

  if ($line =~ m/(.* $mon $day) \d{2}:\d{2}:\d{2} $year: ([^:]+):backup:/){
    my $ServerName = basename $ARGV, '.mydomain.com.backup-software.log';
    my $BckupDate = "$1 $year";
    my $BckupSet  = $2;
    $MyItems{$ServerName}{$BckupSet}->{'1-Server'}    = $ServerName;
    $MyItems{$ServerName}{$BckupSet}->{'2-Logdate'}   = $BckupDate;
    $MyItems{$ServerName}{$BckupSet}->{'3-BackupSet'} = $BckupSet;

    if ($line =~ m/.* \w+ \d{2} (\d{2}:\d{2}:\d{2}) \d{4}: [^:]+:backup:.*START OF BACKUP/){
        $MyItems{$ServerName}{$BckupSet}->{'4-StartTime'} = $1;
    }
    if ($line =~ m/backup-time[:=](.+)/){
        $MyItems{$ServerName}{$BckupSet}->{'5-Duration'} = $1;
    }
    if ($line =~ m/backup-size[:=](.+)/){
        $MyItems{$ServerName}{$BckupSet}->{'6-Size'} = $1;
    }
    if ($line =~ m/Backup succeeded/){
        $MyItems{$ServerName}{$BckupSet}->{'7-Status'} = 'Succeeded';
    }
    if ($line =~ m/ERROR[:=]/){
        $MyItems{$ServerName}{$BckupSet}->{'8-Status'} = 'Unsuccessful';
    }
  }
}

#print Dumper(\%MyItems);
for my $ServerName (keys %MyItems){
  for my $BckupSet (keys %{$MyItems{$ServerName}}){
    for (sort keys %{$MyItems{$ServerName}{$BckupSet}}){
      print $_, '=', $MyItems{$ServerName}{$BckupSet}{$_}, ';';
    }
    print "\n";
  }
}
 
You could record the last line number you processed (the $. variable, if I recall)... but if the log is rotated (which is hopefully the case) then that probably won't be much use.

I'd just save the entire last line that you processed somewhere, and next time around read through the file up to the matching line before you start your processing again.
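
A rough sketch of that approach (the state-file path and sub name are hypothetical, and it re-reads the file into memory for simplicity):

```perl
use strict;
use warnings;

# Return only the lines added since the previous run, using a small state
# file that remembers the last line processed.
sub read_new_lines {
    my ($logfile, $state_file) = @_;

    # Load the last line seen on the previous run, if any.
    my $last = '';
    if (open my $sf, '<', $state_file) {
        $last = <$sf> // '';
        chomp $last;
        close $sf;
    }

    open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
    my @lines = <$fh>;
    close $fh;
    chomp @lines;

    # Start just after the previously seen line; if it is gone
    # (e.g. the log was rotated), start from the top.
    my $start = 0;
    for my $i (0 .. $#lines) {
        if ($lines[$i] eq $last) { $start = $i + 1; last; }
    }
    my @new = @lines[$start .. $#lines];

    # Remember the new last line for the next run.
    if (@new) {
        open my $sf, '>', $state_file or die "Cannot write $state_file: $!";
        print $sf "$new[-1]\n";
        close $sf;
    }
    return @new;
}

# e.g. my @new = read_new_lines('/var/log/server1.log', '/var/tmp/backup-parse.state');
```

Matching on the whole line rather than a line number survives the log being truncated or replaced, as long as timestamps make the lines unique.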

Annihilannic.
 
Hi, Annihilannic. When you say the log is rotated, do you mean the log is renamed, or refreshed? I have a separate script that pulls the logfile remotely. I keep the same name, but the next copy it pulls could contain new records, so the file changes in that sense.
 
For long-running or continuously running services/daemons it is common to "rotate" the log files to prevent them from eventually filling up your filesystem.

Sometimes the files are simply renamed once a week, keeping a certain number:

Code:
myservice.log.2 -> myservice.log.3
myservice.log.1 -> myservice.log.2
myservice.log -> myservice.log.1
create new empty myservice.log

Or sometimes a certain number of lines of the log are retained and the older ones discarded. And sometimes the older logs are also compressed.

Frequently this is done using a tool like "logrotate".

Annihilannic.
 