So here's the deal:
I'm trying to write a Perl script that tests UDP port availability on a remote server. It logs into the remote server over SSH, runs tcpdump to watch for traffic on port 5060 involving its local hostname, then opens a socket to the remote host, sends packets, and counts how many arrived.
This script existed before I got to it, and it used fork() to spawn a child process to run the SSH command while the packets were sent from the parent. I had to port this previously stand-alone program into a Perl module so we could use it in our web-based product, and so I replaced the fork() with threads, because you don't want to use fork() where Apache is involved.
It worked fine using threads, but then we needed these remote servers to come to our control panel server and hit a CGI front-end to this module, to see how their ports are doing (so that the remote server owner could set up alerts in case ports stop being available). For this we were going to use lighttpd and FastCGI, but we ran into a rather bizarre problem.
Here's kinda how the code in the module works:
Code:
use threads;
use IO::Socket::INET;

sub TestUDP {
    my $server = shift;
    my @ports  = @_;    # was "shift", which only grabs the first port
    chomp (my $hostname = `hostname`);   # strip the trailing newline

    foreach my $p (@ports) {
        my $thr = threads->create (sub {
            # -i needs an interface name, otherwise it eats "port"
            my $cmd = "ssh client$server 'tcpdump -l -i any port $p and host $hostname'";
            my @out = `$cmd`;
            return scalar(@out); # return no. of packets read
        });
        sleep 5; # give time for SSH to initialize

        # Send packets.
        my $sock = IO::Socket::INET->new (
            PeerAddr => "client$server",
            PeerPort => $p,
            Proto    => 'udp',
        );
        for (1..20) {
            $sock->send ("x\n");
        }

        # Kill the tcpdump process. This causes the cmd in the
        # thread to exit and return its results, so...
        system ("ssh client$server 'killall tcpdump'");
        my $read = $thr->join();
        print "20 packets sent, $read packets received\n";
    }
}
Anyway, in lighttpd and FastCGI the threads seem to throw things off. Adding a lot of prints shows that the thread starts and rejoins properly, and it will do this for all 3 ports I send into the module to test, but as soon as the last port finishes being tested and that foreach block goes out of scope, the program mysteriously exits. No error messages, it just stops, as though an exit() was there.
So, I've been trying to get away from using threads. Here are my test scripts:
Code:
# tcpdump.pl
my $serv = shift || 1358;
my $pid = open (SSH, "ssh client$serv tcpdump -i any -l port 5060 2>>/dev/null |");
#my $pid = open (SSH, "ssh client$serv tcpdump -i any -l port 5060 and host devbox.example.com 2>>/dev/null |");
$|++; # unbuffer our own STDOUT so test.pl sees lines as they arrive
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm (300);
    while (<SSH>) {
        print;
    }
    alarm (0);
};
if ($@) {
    kill (9, $pid);
}
Code:
# test.pl
use IO::Socket::INET;

print "Testing\n\n";
print "Opening SSH sock\n";
my $serv = shift || 1358;
my $cmd  = "perl tcpdump.pl $serv";
my $pid  = open (SSH, "$cmd |");

print "Sending packets in 5 secs\n";
sleep 5;
my $sock = IO::Socket::INET->new (
    PeerAddr => "client$serv",
    PeerPort => 5060,
    Proto    => 'udp',
) or die "Can't connect: $!";
for (my $i = 0; $i < 20; $i++) {
    print "Sending 'x\\n'\n";
    $sock->send ("x\n");
}

print "Packets sent. Waiting 5 secs\n";
sleep 5;

print "Reading from SSH handle.\n";
my @blah;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm (10);
    while (<SSH>) {
        push (@blah, $_);
        print "got: $_\n";
    }
    alarm (0);
};
if ($@) {
    print "Timeout expired\n";
    kill (9, $pid);
    unless (scalar(@blah)) {
        print "Attempt 2\n";
        my $out = `$cmd`;
        print "out: $out\n";
    }
}
print "results: " . join(";;", @blah);
Ultimately, tcpdump.pl would be wrapped inside a C program that is setuid root, since root is the only user that can SSH from server to server without a password. But these two lines:
Code:
my $pid = open (SSH, "ssh client$serv tcpdump -i any -l port 5060 2>>/dev/null |");
#my $pid = open (SSH, "ssh client$serv tcpdump -i any -l port 5060 and host devbox.example.com 2>>/dev/null |");
is where the problem is. The first line works. As soon as packets come in, the SSH handle can be read from and printed to the terminal, which means test.pl's SSH handle can get those lines as well. It's line buffered, like it should be.
For some reason, though, the second line, which adds the host filter so we only watch packets involving our hostname, isn't line buffered. It doesn't return any packets until the process is killed, and once it's killed, the filehandle returns EOF immediately and nothing can be read from it.
This command with the host filter is the same command that ran in the original threaded model, and it has the -l option, which is supposed to line buffer the output, but for some reason it doesn't.
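For context, here are two generic ways I know of to force line-buffered output from a remote command through a pipe. I haven't confirmed that either one fixes this particular case, and the host and filter below are just the ones from my test scripts, so treat these as things to try rather than a known answer:

```shell
# 1) Allocate a pseudo-tty so the remote tcpdump thinks its stdout is
#    a terminal (-tt forces allocation even when local stdin is not a
#    tty); C stdio then stays line-buffered on its own:
ssh -tt client1358 "tcpdump -i any -l 'port 5060 and host devbox.example.com'"

# 2) If the remote box has GNU coreutils, stdbuf can force line
#    buffering on the command's stdout regardless of what tcpdump does:
ssh client1358 "stdbuf -oL tcpdump -i any 'port 5060 and host devbox.example.com'"
```

Note the single quotes around the filter expression: they keep the remote shell from splitting "port 5060 and host ..." into separate words before tcpdump sees it.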
Does anybody know why? Or, does anybody know if lighttpd+FastCGI is known to have a problem with Perl threads?
-------------
Cuvou.com | My personal homepage
Code:
perl -e '$|=$i=1;print" oo\n<|>\n_|_";x:sleep$|;print"\b";print$i++%2?"/":"_";goto x;'