There are a few ways to avoid entering your password every time, but a couple of them aren't favoured by sysadmins for security reasons, so I'll get those out of the way first...<br>
<br>
1) Ask the sysadmins on each server to add your local host to the file /etc/hosts.equiv. If the host you're connecting from is listed, and the login name you are using on your local machine and the remote machine is the same, you won't need to enter a password.<br>
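For example (a hypothetical entry; see "<tt>man hosts.equiv</tt>" for the exact format on your system), if your local host is "london", the remote server's /etc/hosts.equiv would need a line like:

```
london
```
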
<br>
2) In your home directory on each server, create a file called "<tt>.rhosts</tt>". In this file, enter the name of the local server that you commonly use. If your login name on the local server is different from the one on the remote server, then put a tab after your local server name, followed by your local login name.<br>
<br>
For example, suppose your usual server is "london" with the login "mylogin", and the remote server is "newyork" with the login "remlogin". Log in to "newyork" as "remlogin" and create "<tt>.rhosts</tt>" in your home directory. Add the following line:<br>
<tt><br>
london mylogin<br>
</tt><br>
<br>
This should then allow you to run <tt>rsh</tt> from "london" to "newyork" without entering your password.<br>
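A quick way to check the setup from "london" (<tt>date</tt> here is just a harmless test command; note that on most systems <tt>rsh</tt>'s <tt>-l</tt> option must come before the hostname):

```shell
# Run a harmless command on "newyork" as "remlogin". If .rhosts is
# set up correctly, this prints the remote date with no password prompt.
rsh -l remlogin newyork date
```
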
<br>
Warning: Your sysadmins may run a check on a regular basis that removes <tt>.rhosts</tt> files, as they can compromise security. Probably worth checking before creating <tt>.rhosts</tt> on 60 servers. ;^)<br>
<br>
3) Persuade your sysadmins to install <tt>ssh</tt> onto the servers. ssh uses a public key exchange between the server and client for authentication.<br>
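If the version installed is a reasonably modern OpenSSH, the one-time setup for password-less logins is a sketch along these lines (key types and file names vary between <tt>ssh</tt> versions, so treat this as an outline rather than a recipe):

```shell
# On "london": generate a key pair. An empty passphrase gives fully
# automatic logins, at some cost in security.
ssh-keygen -t rsa

# Append the public key to the remote account's authorized_keys file.
# (You'll be asked for the password this one last time.)
cat ~/.ssh/id_rsa.pub | ssh remlogin@newyork \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```
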
<br>
However, there may be problems with installing this. I've heard of situations where the <tt>ssh</tt> packets flowing around the network can cause some servers to stop talking to the rest of the network. This would probably mean installing <tt>ssh</tt> onto all 60 servers in one fell swoop... Not a difficult task, but not exactly trivial, either!

<br>
<br>
As far as the automatic part goes, you could ask the sysadmins to give you permission to run cron on your local server ("london" in the previous example). You could then set up a cron job to run a shell script, say, every five minutes to do the "ps" on each remote server.<br>
<br>
The shell script might look like the following. ("\" indicates that a line has been split for readability. Lines split in this way may be entered as one line in the script.)<br>
<tt><br>
#!/bin/sh<br>
MYLOGFILE=/path/to/log/file<br>
<br>
# Create a new logfile.<br>
>$MYLOGFILE<br>
<br>
# Loop around each server, checking for jobs run by me.<br>
for server in london newyork paris tokyo<br>
do<br>
echo "My processes on $server" >>$MYLOGFILE<br>
rsh -l my_login_name $server ps -fu my_login_name \<br>
>>$MYLOGFILE<br>
done<br>
</tt><br>
<br>
The crontab entry would then look like:<br>
<tt><br>
# Run my process log command every five minutes, between 8am<br>
# and 6pm.<br>
0,5,10,15,20,25,30,35,40,45,50,55 8-18 * * * /path/to/script<br>
</tt><br>
<br>
Doing it this way, $MYLOGFILE as named in the script will always show what was running on the servers up to five minutes ago. I'd suggest running the script manually first, to see how long it takes. If it takes more than five minutes, make the interval in the crontab larger, e.g. set the minutes field to "0,10,20,30,40,50".<br>
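To time the script, something like the following will do (a sketch assuming your <tt>date</tt> supports <tt>+%s</tt>; the <tt>sleep 2</tt> stands in for the real script path):

```shell
#!/bin/sh
# Time a command in whole seconds by subtracting two timestamps.
# Replace "sleep 2" with /path/to/script when trying this for real.
start=`date +%s`
sleep 2
end=`date +%s`
echo "Took `expr $end - $start` seconds"
```

If the reported time approaches your cron interval, widen the interval before installing the crontab entry.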
<br>
For more info on cron, look at "<tt>man cron</tt>" and "<tt>man crontab</tt>". There are also man pages on <tt>rhosts</tt>, <tt>hosts.equiv</tt>, <tt>rsh</tt>, <tt>rcp</tt>, and <tt>rlogin</tt>. If installed, there will also be man pages for <tt>ssh</tt>.<br>
<br>
Hope this helps