
How do I stop this script from stalling?


Birbone (MIS) | Dec 20, 2000 | 141 posts | US
I have multiple sessions of the following script running simultaneously in the background against various files, and all running sessions should append their results to one log file.

tail -1 -f /dir1/dir2/dir3/$filename | while read RECORD
do
  echo `date`":"$RECORD >> LOGFILE
done

This script works well until a user opens one of these files in an editor. At that point, all logging to LOGFILE stops, but all sessions still show up as actively running.

How can I make this logging more fault tolerant so that all sessions continue logging?

P.S. Preventing users from editing these files is not an option.

-B :cool:
birbone@earthlink.net
 

Try this:


cp /dir1/dir2/dir3/$filename /tmp/$$
tail -1 -f /tmp/$$ | while read RECORD
do
  echo `date`":"$RECORD >> LOGFILE
done
 
I appreciate your suggestion, but it won't work. That would just give me a static copy that never grows, so tail -f would never have anything new to log. And if I repeated the cp at set intervals, I would get duplicate data in the log file, and it would corrupt the time stamps of when events occurred.

-B :cool:
birbone@earthlink.net
 
What do you mean by "At that time, all logging stops to the LOGFILE, but all sessions still show up as actively running"?

Multiple sessions would generate multiple stamps for each new line being written to the files, but logging should continue.

A little more specificity would help.
 
The OP doesn't make it clear, but I suspect what's happening is this: after user X opens the file for editing, the logging process continues adding lines to the original file. Later, when user X saves, the editor writes out an image of the file as it was when it was opened (plus the edits, of course); any entries appended since then are lost.

If this accurately describes the situation, you'll need to encapsulate the edit process so that the content in the edit buffer is separated from the working log, and then copied back afterwards in such a way that new entries are retained and appended to the edited content.
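One hedged sketch of that encapsulation idea (file names and the $EDITOR fallback are invented for illustration, not from the thread): snapshot the live file, let the user edit the snapshot, then append whatever was logged in the meantime before swapping the result back.

```shell
#!/bin/sh
# Hypothetical sketch: edit a snapshot of the live file, then append
# any lines that were logged during the edit before swapping it back.
live=/dir1/dir2/dir3/$filename   # placeholder path from the thread
snap=/tmp/edit.$$

cp "$live" "$snap"
before=`wc -l < "$live"`
${EDITOR:-vi} "$snap"            # user edits the snapshot, not the live file
after=`wc -l < "$live"`

if [ "$after" -gt "$before" ]; then
  # keep the entries that arrived while the edit was in progress
  tail -n `expr $after - $before` "$live" >> "$snap"
fi
mv "$snap" "$live"
```

Note this still replaces the live file's inode at the end, so any plain tail -f watching it would need the same treatment.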
 
So the question comes down to this: if $filename is being modified, do not log; otherwise, log only the new entries?
 
Since the LOGFILE is supposed to be continually growing as activity takes place, its time/date stamp should always be relatively current.
Twice I have done an "ll" on the file and found the time/date stamp old even though activity was currently taking place. A "ps -ef | grep various_filenames" shows all expected sessions as actively running. The last entry in the LOGFILE is a record of which file was last manually edited and when, and the LOGFILE's own date/time stamp matches the time of that edit.

So by stating that all logging stops, I mean that although multiple sessions should be continuing to update the LOGFILE, the log file is stale.


-B :cool:
birbone@earthlink.net
 
New research to add to the mix: this problem is not consistent. It occurs most of the time, but it has occurred with both the vi and the tail commands being executed.

-B :cool:
birbone@earthlink.net
 
I tested on HP-UX: the logging continues, but only when the modifier writes the new content out to the file, i.e. in vi, logging does its job only when you do :w or ZZ. What happens in the buffer between writes doesn't seem to matter. Suppose $filename looks like this:

test1
test2

At this point you haven't saved yet, but you decide to change test2 to test11 and then save it; the log file then looks like the following:

Tue Apr 29 13:18:52 CDT 2003:test1
Tue Apr 29 13:18:53 CDT 2003:test11
 
Here's another thought. "tail -f" is intended for files that are expected to keep growing. If a user edits the file and then saves it while it's still being appended to, effectively truncating or replacing the file, the tail command gets confused and is unable to put out anything else.
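A minimal sketch (hypothetical file names, not from the thread) of why a plain "tail -f" can go quiet: many editors save by writing a temp file and renaming it over the original, so the name ends up pointing at a brand-new inode while tail keeps reading the old, now-orphaned one.

```shell
#!/bin/sh
# Hypothetical demo: simulate an editor's save-by-replace and show
# that the inode changes, which is why "tail -f" holding the old
# file descriptor stops seeing new data.
f=/tmp/demo_log.$$
echo "line1" > "$f"
old_inode=`ls -i "$f" | awk '{print $1}'`

# Save-by-replace: write a temp file, then rename it over the
# original -- same name, different inode.
echo "line1 edited" > "$f.new"
mv "$f.new" "$f"
new_inode=`ls -i "$f" | awk '{print $1}'`

if [ "$old_inode" != "$new_inode" ]; then
  echo "inode changed: a tail -f on the old descriptor goes quiet"
fi
rm -f "$f"
```

Whether a given editor truncates in place or replaces the file varies by editor and platform, but either way the tail pipeline stalls.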

Here's another approach that might work for you.

#! /bin/ksh
# Poll the file once a second and log the last line whenever it changes.
lastvar=""
while true
do
  var=`tail -1 /dir1/dir2/dir3/$filename`
  if [ "$var" != "$lastvar" ]; then
    echo `date`":"$var >> LOGFILE
    lastvar="$var"
  fi
  sleep 1
done
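If GNU tail is available (an assumption; the stock HP-UX tail of that era may not support it), its -F flag follows the file by *name*, reopening it after a replace or truncation, which sidesteps the problem without polling. A sketch:

```shell
#!/bin/sh
# Sketch assuming GNU tail: -F re-opens the file by name when an
# editor replaces or truncates it, so logging survives edits.
# Runs until interrupted, like the original script.
filename=example.log        # placeholder for the thread's $filename
: > "$filename"
tail -n 1 -F "$filename" | while read RECORD
do
  echo "`date`:$RECORD" >> LOGFILE
done
```

With -F, the "all sessions running but LOGFILE stale" symptom should disappear, because tail notices the inode change and reattaches.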
 
Sampsonr, I'll try your suggestion for the next few days and let you know how it goes. Thanks.

-B :cool:
birbone@earthlink.net
 