
Missing Files in AIX

Status
Not open for further replies.

ronfarris

Programmer
Oct 25, 2000
30
US
Hi Everyone and Thank you all for your help so far:
I am having trouble with files disappearing off of my RISC server. This has happened twice within a month. Fortunately I had a backup copy of one file, and the other I was able to get from another server. Does anyone know how this might be happening, and how I can check whether someone other than myself may have deleted them? When files become corrupt, do they sometimes change their names or turn up missing?
Thank you in advance for your help.
Ron
 
I guess you have a security problem; try setting up auditing and accounting to discover the real facts...
I hope it works...
Unix was made by and for smart people.
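
To expand on the auditing suggestion: on AIX the audit subsystem can record deletions via the FILE_Unlink event. Below is a minimal sketch of an /etc/security/audit/config fragment, assuming the stock layout -- the class name "filemon" is made up, and FILE_Unlink/FILE_Rename should be checked against your own /etc/security/audit/events file:

```
* /etc/security/audit/config (fragment) -- hypothetical "filemon" class
classes:
        filemon = FILE_Unlink,FILE_Rename

users:
        root = filemon

* then start auditing and read the trail:
*   audit start
*   auditpr -v < /audit/trail | grep FILE_Unlink
```

Auditing only the unlink/rename events for a few users is much cheaper than a full trace, which may matter given limited resources.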
 
What are the names of the files? Are they in a particular directory?

Do you have a cron job that deletes files older than a certain age?
Mike
michael.j.lacey@ntlworld.com
Email welcome if you're in a hurry or something -- but post in tek-tips as well please, and I will post my reply here as well.
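
For reference, the sort of job Mike is describing often looks like the fragment below: a find(1)-driven cleanup run from cron. The directory and the 30-day threshold are invented for the demo; on the real box you would check `crontab -l` for root and the application users:

```shell
#!/bin/sh
# Hypothetical housekeeping job of the kind described above.  The
# directory and the 30-day age threshold are made up for this demo.
LOGDIR=/tmp/cleanup_demo
mkdir -p "$LOGDIR"
touch -t 202001010000 "$LOGDIR/old.log"   # stale file, mtime set to 2020
touch "$LOGDIR/new.log"                   # fresh file
# The deletion itself -- this is the line to look for in suspect scripts
# (scheduled in cron as, e.g.:  0 2 * * * /usr/local/bin/cleanup.sh):
find "$LOGDIR" -type f -mtime +30 -exec rm -f {} \;
ls "$LOGDIR"                              # only new.log survives
```

If such a job exists and its path variable is ever empty or wrong, it will quietly delete files from somewhere you did not intend.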
 
We've had problems like this before, when a housekeeping script was supposed to carry out deletions once a month from a particular directory. Unfortunately the script did not change to the correct directory, owing to unbalanced parentheses, and it started deleting from the wrong place (yes, root!). Only a suggestion.
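
That failure mode is worth spelling out: if the cd in a cleanup script fails, the rm runs in whatever directory the script started from (for init or cron jobs, often /). A minimal sketch of the safe form, with placeholder paths:

```shell
#!/bin/sh
# Guarded cleanup: abort unless the cd succeeds, so a typo or an
# unbalanced quote in the path can never make the rm run from /.
WORKDIR=/tmp/purge_demo        # placeholder directory for the demo
mkdir -p "$WORKDIR"
touch "$WORKDIR/junk.tmp"
cd "$WORKDIR" || exit 1        # never fall through to the rm on failure
rm -f ./*.tmp                  # relative path limits the blast radius
```

The `|| exit 1` is the whole fix: without it, a failed cd is silently ignored and the next command runs in the wrong directory.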
 
Hi Everyone and Thank you for your great advice!
I haven't found any crontab or housekeeping files that would do this. One file that disappeared is a script file that I rely on almost constantly, and the one that disappeared just recently is an executable file that the script needs to run.
As far as setting up auditing and accounting goes, does that use up a lot of resources? If so, that could be a problem: I have limited resources, as the computer is constantly running a large number of peripherals. If I run a trace or almost anything else, the system locks up.
Thank you again!
Have a nice day!
Ron
 
These files are in separate directories, and I had never been in the executable file's directory until the file turned up missing. I don't think it was finger trouble on my part, anyway. Could have been someone else, though.
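
Before turning on full auditing, a few cheap checks can help narrow down whether a file was moved or deleted, and by whom. The file name and search root below are placeholders for the demo; on the real server you would search from / and look at each user's .sh_history:

```shell
#!/bin/sh
# Cheap first checks.  FILE and SEARCHDIR are placeholders -- on the
# real server, search from / and inspect each user's ksh history.
FILE=myscript.sh
SEARCHDIR=/tmp/search_demo
mkdir -p "$SEARCHDIR/sub"
touch "$SEARCHDIR/sub/$FILE"          # demo stand-in for the "lost" file
# Was the file moved rather than deleted?
find "$SEARCHDIR" -name "$FILE" 2>/dev/null
# Who has logged in recently?  (needs a real wtmp, so commented out here)
# last | head -20
# Any shell history mentioning the file?
# grep -l "$FILE" /home/*/.sh_history 2>/dev/null
```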
 
Hi,

Since I recently dealt with just such a problem, here are two scripts I have used to trace any remove (unlink) event (excluding NFS deletes!).

The first script creates a binary trace file in a loop:
======================================
#!/bin/ksh
# This script collects delete events from the local file systems.  It
# writes a temporary binary trace file, trace1, which is renamed to
# trace; trace_rpt.sh then sorts the relevant unlink events out of it
# into trace.$DATE1 files.
# Add both scripts to /etc/inittab with:
#   mkitab trace_loop:2:once:/scitex/version/scripts/trace_loop.sh
#   mkitab trace_rpt:2:once:/scitex/version/scripts/trace_rpt.sh

TRACEDIR="/dataVolumes/timna10.2.0/tracedir"
chmod 700 $TRACEDIR >/dev/null 2>&1

while true ; do
    #echo "start: `date`"
    trace -a -j 101,107 -s -T 10000000 -L 20000000 -o $TRACEDIR/trace1
    # Wait for the trace to finish:
    OVER=0
    while [[ $OVER = 0 ]] ; do
        sleep 1
        ps -ef | grep trace | grep -vE "grep|trace_loop|trace_rpt" >/dev/null
        OVER=$?
    done
    #echo "stop: `date`"
    # Rename to the final trace log so that trace_rpt.sh starts processing it:
    mv $TRACEDIR/trace1 $TRACEDIR/trace >/dev/null 2>&1
done
==============================
The second script processes the data and creates small text files containing the "unlink" (delete) events with their process IDs and paths:
==============================
#!/bin/ksh
# This script sorts the unlink events out of the "trace" file created in
# parallel by trace_loop.sh, writing them into trace.$DATE1 files.  Runs
# in a loop.
# "trace" is the binary file handed over by trace_loop.sh, "trace.txt"
# holds all the "unlink" and "lookuppn" events, "trace2.txt" holds the
# line numbers of the "unlink" events, and "trace.$DATE1" is the final
# filtered file.

TRACEDIR="/dataVolumes/timna1.2.0/tracedir"
HOSTNAME=`hostname`

while true ; do
    if [[ -f $TRACEDIR/trace ]] ; then
        #echo "start: `date`"
        # Report to a temporary log, grepping for the relevant strings:
        trcrpt -O exec=on -O pid=on $TRACEDIR/trace | grep -E "unlink|lookuppn|ELAPSED" > $TRACEDIR/trace.txt
        DATE1=`date +%H%M%S%j`
        # Record the line number of every line containing "unlink":
        grep -n unlink $TRACEDIR/trace.txt | awk -F: '{ print $1 }' > $TRACEDIR/trace2.txt
        if [[ -s $TRACEDIR/trace2.txt ]] ; then
            for LINENUMBER in `cat $TRACEDIR/trace2.txt` ; do
                # Print every "unlink" event plus the 3 following lines into
                # the final log, filtering out irrelevant filesystems:
                tail -n +$LINENUMBER $TRACEDIR/trace.txt | head -4 | grep -vE "/usr|/var|/scitex|/sites" >> $TRACEDIR/trace.$DATE1
                echo "---------------------------------------------------------" >> $TRACEDIR/trace.$DATE1
            done
            # Collect some more info into the log:
            echo "=====================================" >> $TRACEDIR/trace.$DATE1
            echo "\nStations connected over IP:" >> $TRACEDIR/trace.$DATE1
            echo "-----------------------------\n" >> $TRACEDIR/trace.$DATE1
            arp -a >> $TRACEDIR/trace.$DATE1
        fi
    fi
    rm $TRACEDIR/trace >/dev/null 2>&1
    #echo "stop: `date`"
    sleep 1
done
===========================





"Long live king Moshiach !"
h
 