AnotherAlan
Technical User
Hi All,
I have to write something that will monitor a logfile in real time, check for network latency and send an email if certain latency conditions are met. I have a working script that performs the conditional element (thanks to Feherke), but I am now struggling with the logic of how to ensure that only new events are raised and how to do the real-time monitoring. I'm not a developer, only a frustrated admin who likes to write scripts, so apologies for my ignorance.
My problem:
My current script parses the entire logfile, so it will raise duplicate alerts.
It runs on an ad-hoc basis, i.e. only when called from the command line or by cron, but I would like it to monitor the logs at all times.
My thought patterns so far:
Use tail -f. I tried piping this into an awk statement but it didn't work... probably my fault (see the first sketch below).
Use a while loop, but it's hard to figure out a condition for when to break out of the script and restart without missing some lines in the logs, i.e. tail -f piped to awk and then into a log, watch for that log to become non-zero bytes, extract the data and email (see the second sketch below).
Use fgrep and a "check" log to remove lines already parsed. I've used this before and it works; my worry is the size of these logs and the system overhead if the script is running continuously (see the third sketch below).
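First sketch - the tail -f into awk idea. This is roughly what I was attempting; the log path, the field number ($5) and the 200 ms threshold are just placeholders, not my real values:

    # -F rather than -f so the pipe survives log rotation;
    # fflush() so awk's output isn't held in its buffer when piped onward
    tail -F /var/log/network.log | awk '$5 > 200 { print "High latency:", $0; fflush() }'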
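Second sketch - the while loop idea. Since tail -f never exits, the loop just blocks waiting for new lines, so maybe there doesn't need to be a break/restart condition at all. Again the field number, threshold and address are made up, and it assumes mailx is available:

    #!/bin/sh
    # follow the log forever and alert on each new high-latency line
    tail -F /var/log/network.log | while read -r line
    do
        latency=$(echo "$line" | awk '{ print $5 }')   # assumes field 5 is an integer (ms)
        if [ "$latency" -gt 200 ]; then
            echo "$line" | mailx -s "Latency alert" admin@example.com
        fi
    done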
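Third sketch - the fgrep/check-log idea, run from cron. This is the approach I've used before; the snag is that seen.log grows as fast as the source log, which is exactly the overhead I'm worried about (file names are placeholders):

    touch seen.log    # make sure the check log exists on the first run
    # keep only whole lines not already recorded in the check log
    fgrep -v -x -f seen.log /var/log/network.log > new_events.log
    # remember them so the next run doesn't alert on them again
    cat new_events.log >> seen.log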
I've spent 48 hours toiling with this and am now going around in circles. Any pointers to put me back on track would be very much appreciated.
Thanks
Alan