
How do I list all cron jobs scheduled


TeleBOYWONDER
Hi (I posted this on the Solaris site too...sorry)

I need to list all cron jobs scheduled for a specific time frame, like today between 2pm and 3pm.
Basically I have a script (somewhere) running that adds a menu to an application running on the box. However, when the secondary menu is selected and one of its options is used to run a report, the job dies:

"Creating Reports Please Wait............................................
....
Sending report(s) to server(s).
/export/home/pserv/k12/january[20]: 3057 Killed
06/22/04"

 
The command needed to view the list of cron jobs depends on what user you are logged in as, and what user the cron jobs are processed under. You should refer to the manual pages.

man crontab

Your application may be designed in such a way that it only works through a tty device, in which case you would not be able to run it directly from cron.

It could also be that your cron is running a second phase of your process before a first phase is completed, which could cause the first phase to abort prematurely. This might happen if you have a separate cron entry for each phase rather than calling a script that runs the separate phases sequentially.
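For example, something along these lines should work (a rough sketch; on Solaris the username goes straight after the option, so check your man page for the exact syntax on your release, and "pserv" is only a guess at the owning user based on the paths in your error message):

Code:
crontab -l        # crontab of the user you are logged in as
crontab -l pserv  # as root, list another user's crontab (e.g. pserv)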
 
Well, I am logged in as root, so shouldn't all cron jobs be available?
 
So I found the jobs; now I am wondering if the script is saving the file and then moving it, or writing over the file each time it runs.
It seems to be writing over the file, but the last half of the command lines is lost on me:

55 * * * * /export/home/pserv/k12_red/check_rta >/dev/null 2>&1

#35 * * * * /export/home/pserv/k12_pacs/check_rta >/dev/null 2>&1

Again thanks in advance.
 
jhinckley

"crontab -l" only shows jobs for the current user, not all users.

Technically, if you want to see all jobs for all users, you can get that information with the following command as root:

Code:
more /usr/spool/cron/crontabs/*
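If you want each user's entries labelled, a quick loop from root should also do it (untested sketch; on newer Solaris releases the files live under /var/spool/cron/crontabs, with /usr/spool as a link to it):

Code:
cd /usr/spool/cron/crontabs
for u in *
do
    echo "=== crontab for $u ==="
    cat "$u"
done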



TeleBOYWONDER

Whether it is overwriting the file depends on the contents of the script. Unix does not make backups before overwriting files, so if your script does not make a backup copy itself, then yes it is overwriting the file.

Your cron line is running the script "/export/home/pserv/k12_red/check_rta" at the 55th minute of every hour of every day (the k12_pacs line is commented out with the leading "#", so it never runs). The ">/dev/null 2>&1" part of the line just redirects output to the null device, and likely has nothing to do with the script process itself. You need to look at the code of the script file to see what it is actually doing.
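For reference, the five leading fields of a cron line are minute, hour, day of month, month, and day of week, so your active line breaks down like this (just a restatement of the line you posted):

Code:
# min  hour  dom  month  dow  command
  55   *     *    *      *    /export/home/pserv/k12_red/check_rta >/dev/null 2>&1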
 
Why not move to that directory and list the different crontabs? All are text files that can be listed with cat or more.



Ed Fair
Give the wrong symptoms, get the wrong solutions.
 
apeasecpc,
So where is the output being directed? I mean, what is the null device (it's not a printer, obviously...right)?

More than the error, I just want to find the cron job, figure out where the contents are sent, and see whether it writes over the output each time it runs.

Thanks guys and ladies too.
 
The null device is in essence a data trash can where you can send status and error messages from a program if you don't want to see them or retain them. It is common for the output of cron jobs to be redirected to null because they run in the background and do not normally require any user intervention, so the status and error messages do not need to be saved.
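A quick illustration of the redirection (any commands will do here; /nosuchdir is just a made-up path):

Code:
date > /dev/null                 # standard output thrown away
ls /nosuchdir > /dev/null 2>&1   # both output and error messages thrown away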

What you most likely have is a unix script that generates a report and saves it at a location specified within the script. The redirection to null on the cron job probably does not have any bearing on where your report is being saved.

What you need to do is look at the contents of the file "/export/home/pserv/k12_pacs/check_rta". Assuming the file is a unix script and not an executable, you should be able to determine where your report is being saved from the contained script code.

The answer to your question is contained in the script file. If you need help deciphering it, post the contents to this forum.
 
So there are two script files; I'm interested in knowing the process being run and where the data is sent:

Code:
# load functions and configuration
DIR=`dirname $0`
. $DIR/functions
usage()
{
echo "This is used internally."
echo "$0 Process-ID.mon.day.yr2.starttime.ACD"
exit 1
}
verify_transfer()
{
DATE_STRING=$(date '+%D')
TIME_STRING=$(date '+%H:%M:%S')
awk '
$1 ~ /^530/ {
printf("'$DATE_STRING' - '$TIME_STRING' - FTP Login Incorrect on LAN Conn
ection '$j'\n")
}
$1 ~ /^550/ {
printf("'$DATE_STRING' - '$TIME_STRING' - Access denied to file or directo
ry for LAN connection '$j'\n")
}
$1 ~ /^553/ {
printf("'$DATE_STRING' - '$TIME_STRING' - Cannot '${APPN[$j]}' '$FIXED_FN
AME' for LAN connection '$j'\n")
}
$1 ~ /^226/ {
printf("'$DATE_STRING' - '$TIME_STRING' - Sent '$FIXED_FNAME' to LAN conn
ection '$j'\n")
}
' /tmp/$$.transport.log >> $PKGHOME/${NAME}.log
echo "" >> $PKGHOME/transport.log
date >> $PKGHOME/transport.log
cat /tmp/$$.transport.log >> $PKGHOME/transport.log
rm /tmp/$$.transport.log
}
lan()
{
j=1
while ((j<=LAN_CON))
do
for k in ${ACD[j]}
do
if ((ACDNUM == $k)); then
FIXED_FNAME=$(echo ${NAME[j]} | sed ' s/$mon/'$MON'/g;
s/$day/'$DAY'/g;
s/$yr2/'$YEAR2'/g;
s/$hour/'$HOUR'/g;
s/$min/'$MIN'/g;
s/$intr/'$INTR'/g;
s/$acd/'$k'/g;')
echo "verbose" > /tmp/${NAME}.$$
echo "open ${DEST[j]}" >> /tmp/${NAME}.$$
echo "user ${USER[j]} ${PASS[j]}" >> /tmp/${NAME}.$$
echo "cd \"${DIR[j]}\"" >> /tmp/${NAME}.$$
echo "pwd" >> /tmp/${NAME}.$$
if [ "${APPN[j]}" == "APPEND" ] ; then
echo "append $1 \"$FIXED_FNAME\"" >> /tmp/${NAME}.$$
else
echo "put $1 \"$FIXED_FNAME\"" >> /tmp/${NAME}.$$
fi
echo "close ${DEST[j]}" >> /tmp/${NAME}.$$
echo "quit" >> /tmp/${NAME}.$$
chmod +x /tmp/${NAME}.$$
# added code here to kill hung FTP processes - KPA 11/18/02
childtime=300 # timeout value in seconds for each FTP
# Timer - a child process that will kill this process if their Microsoft
# FTP server hangs the FTP
sh -c "sleep $childtime; echo \`date\` waited too long, killing ftp >> $PKGHOME/
${NAME}.log; kill -9 $$ >/dev/null 2>&1; rm -f /tmp/$PPID*; rm -f /tmp/*$PPID; r
m -f /tmp/$$*; rm -f /tmp/*$$" &
# save the child PID so we can kill it if we FTP normally
childpid=$!
# initiate the FTP script
ftp -n < /tmp/${NAME}.$$ >> /tmp/$$.transport.log
# if we get here, FTP ran - stop the child timer process
kill -9 $childpid >/dev/null 2>&1
rm /tmp/${NAME}.$$
verify_transfer
fi
done
((j+=1))
done
}
ldc()
{
j=1
while ((j<=LDC_CON))
do
for k in ${LOCAL_ACD[j]}
do
if ((ACDNUM == $k)); then
FIXED_FNAME=$(echo ${LOCAL_FNAME[j]} | sed ' s/$mon/'$MON'/g;
s/$day/'$DAY'/g;
s/$yr2/'$YEAR2'/g;
s/$hour/'$HOUR'/g;
s/$min/'$MIN'/g;
s/$intr/'$INTR'/g;
s/$acd/'$k'/g;')
if [ -d "${LOCAL_DIR[j]}" ] ; then
case "${LOCAL_OVR[j]}" in
"YES")
cat $1 > "${LOCAL_DIR[j]}/$FIXED_FNAME"
;;
"NO")
cat $1 >> "${LOCAL_DIR[j]}/$FIXED_FNAME"
;;
esac
echo "$(date '+%D - %H:%M:%S ')- Copied $FIXED_FNAME for LDC connect
ion $j" >> $PKGHOME/${NAME}.log
else
echo "$(date '+%D - %H:%M:%S ')- ERROR Cannot write $FIXED_FNAME to
${LOCAL_DIR[j]} for LDC connection $j" >> $PKGHOME/${NAME}.log
exit 1
fi
fi
done
((j+=1))
done
}
# Start executing script from here
if (($# < 1)) ; then
usage $0
fi
TEMP=$(ls $REPORTDIR/$1*)
if [ -n "$(echo $TEMP | grep "no such")" ] ; then
exit 1
fi
for file in $TEMP
do
ID=$(echo $file | awk 'BEGIN { FS="." } { print $1 }')
MON=$(echo $file | awk 'BEGIN { FS="." } { print $2 }')
DAY=$(echo $file | awk 'BEGIN { FS="." } { print $3 }')
YEAR2=$(echo $file | awk 'BEGIN { FS="." } { print $4 }')
INTR=$(echo $file | awk 'BEGIN { FS="." } { print $5 }')
HOUR=$(echo $file | awk 'BEGIN { FS="." } { $5 = substr($5, 1, 2); print $5 }')
MIN=$(echo $file | awk 'BEGIN { FS="." } { $5 = substr($5, 3, 2); print $5 }')
ACDNUM=$(echo $file | awk 'BEGIN { FS="." } { print $6 }')
lan $file
ldc $file
rm $file
sleep 5
done
cleanup 0
 
The script appears to be updating some log files in $PKGHOME, which I'm assuming is a path stored in an environment variable. It uses the append redirection ">>" on those, so it should be adding to the existing log files and not overwriting anything. The report file itself is FTP'd to remote servers in the lan function and copied locally in the ldc function; there the local copy is overwritten (">") when LOCAL_OVR is YES and appended to (">>") when it is NO.

To see what $PKGHOME points to, from a root login type the following commands, and look at the output screens until you see a reference to PKGHOME:

env | more
set | more
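
If you would rather not page through the whole list, a grep narrows it down (a minimal sketch, assuming your grep supports -i for a case-insensitive match):

Code:
env | grep -i pkghome
set | grep -i pkghome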

If that doesn't work, try doing a search for the file "transport.log", which is the name of one of the log files:

find / -name transport.log -print

Once you have determined where $PKGHOME points, look at the ".log" files in that folder.
 
How do I temporarily stop the cron job, but NOT delete it?

Thanks.
 
Make a backup of your current cron:
Warning!!! Don't forget the "-l" or you will erase your existing cron!!!

Code:
crontab -l >{cronbackupname}

Copy the backup to a temporary copy:

Code:
cp {cronbackupname} {crontempname}

Edit the temporary copy with vi or the editor of your choice. Remove the line you wish to turn off, or put a "#" comment marker in front of it.

Activate the altered cron:

Code:
crontab {crontempname}


To restore the previous cron to active, just activate it the same way:

Code:
crontab {cronbackupname}
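
Putting the whole sequence together with hypothetical file names (cron.bak and cron.tmp are just placeholders; use whatever names you like):

Code:
crontab -l > /root/cron.bak     # backup of the live crontab
cp /root/cron.bak /root/cron.tmp
vi /root/cron.tmp               # comment out the unwanted line with a leading "#"
crontab /root/cron.tmp          # install the edited copy
# later, to put the original back:
crontab /root/cron.bak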
 