
Separating stderr and stdout... puzzled...

Status
Not open for further replies.

maccle

Programmer
Feb 12, 2004
3
SE
Okay, I'm trying to separate out stdout and stderr. By all means just tell me there is a better way, but...
I have the following function to pull out the two streams and write them to separate files, while also providing them as a parsable return value to the calling code.
I have pasted two code segments. The first is the function, including a debug line that writes the unprocessed command output to a file. The second is a test for processing that output, to see where my error was. Unfortunately the test works, but the function doesn't... I'm stumped:-

#function to take a single command and write the outputs (stdout and stderr)
#to separate files and the console
#NOTE: if multiple commands are passed in a single string, only the execution
#of the last will have its output captured
sub execute
{
    my $cmd = shift;
    my $retval = "";
    my ($output, $line);

    #prepend each STDOUT line with "@#STDOUT%:"
    $output = `($cmd|perl -e"while(<STDIN>){s/^/\@\#STDOUT\%:/;print;}")2>&1`;
    #DEBUG Output to file###########
    print PURE $output;

    #Split the output into lines and process each
    foreach $line (split(/\n/, $output))
    {
        #If the line has the prepended tag
        if ($line =~ s/^\@\#STDOUT\%://)
        {
            #print to file
            print OUT "$line\n";
            #print to console
            print "$line\n";
            #add to the return value
            $line =~ s/^/STDOUT:/;
            $retval = $retval . "$line\n";
        }
        else
        {
            #print to file
            print ERR "$line\n";
            #print to console
            print "$line\n";
            #add to the return value
            $line =~ s/^/STDERR:/;
            $retval = $retval . "$line\n";
        }
        return $retval;
    }
}

As mentioned at the start, the debug line lets me capture the raw output and run it through a test routine:-

open(PURE, "<./PureOutput.txt");
#declare in a block to prevent the changes to special variables propagating
{
    #unset the record separator
    $/ = "";
    #read in the whole file
    $output = <PURE>;
}

#Split the output into lines and process each
#(this is exactly the same as in the execute function)
foreach $line (split(/\n/, $output))
{
    #If the line has the prepended tag
    if ($line =~ s/^\@\#STDOUT\%://)
    {
        #print to file
        print OUT "$line\n";
        #print to console
        print "$line\n";
        #add to the return value
        $line =~ s/^/STDOUT:/;
        $retval = $retval . "$line\n";
    }
    else
    {
        #print to file
        print ERR "$line\n";
        #print to console
        print "$line\n";
        #add to the return value
        $line =~ s/^/STDERR:/;
        $retval = $retval . "$line\n";
    }
}
#print $retval;
exit;

Now, running the function against a set of commands yields rubbish in the console and the STDOUT file (they seem to contain only the first and last lines of each command's output); the pure-output debug file and the STDERR output file seem fine.
If I then run the pure-output debug file through the test script, I get the expected output in all files and the console. Even the return values are fine.
I cannot understand why one works and the other doesn't... ideas?

Malc
 
Phew. That's a lot of code and it's Sunday. You could always
Code:
use File::Temp qw/ :POSIX /;
my $errfile = tmpnam();
my $stdout = `$cmd 2>$errfile`;
my $stderr;
if ( -s $errfile ) {
  open( local *E, $errfile );
  local $/; # slurp
  $stderr = <E>;
}
unlink $errfile; # allow silent failure

or

Code:
use File::Temp qw/ :POSIX /;
my $errfile = tmpnam();
{ # localisation block gives auto-close on O
  open( local *O, "$cmd 2>$errfile |" );
  process_stdout($_) while <O>;
}
if ( -s $errfile ) {
  open( local *E, $errfile );
  process_stderr($_) while <E>;
}
unlink( $errfile );
if there's likely to be a lot of output.
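For reference, the first approach can be fleshed out into a complete, self-contained script along these lines; the inner command here is a stand-in chosen purely for illustration, not anything from the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw/ :POSIX /;

# Sketch of the temp-file approach above: capture stdout via backticks,
# divert stderr to a temporary file, then slurp it back in.
my $cmd     = q{perl -e 'print "to stdout\n"; warn "to stderr\n"'};  # stand-in command
my $errfile = tmpnam();
my $stdout  = `$cmd 2>$errfile`;
my $stderr  = '';
if ( -s $errfile ) {
    open my $eh, '<', $errfile or die "open $errfile: $!";
    local $/;    # slurp mode: read the whole file in one go
    $stderr = <$eh>;
    close $eh;
}
unlink $errfile;    # tidy up; ignore failure
```

Using a lexical filehandle (`my $eh`) gets the same auto-close-on-scope-exit behaviour as the `local *E` trick, and is the more common idiom in later Perl.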

It's much easier to read (IMHO) and, therefore, less likely to surprise you by doing something unexpected.

Yours,

fish

"As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs."
--Maur
 
You are right: simpler, cleaner, and more maintainable... I had been trying to keep the script off the disk as much as possible, as the commands it runs are pretty disk-intensive already. But that probably isn't worth the confusion.
I would still be interested if anyone can work out why it works in one place and not the other. I'm used to that happening in C++ and the like, where you can screw up the memory in the rest of the program, but Perl is usually more consistent.

Malc
 
There are loads of similar threads in this forum and they usually end up using a file. If there's not much data and you're deleting it almost immediately, then it might never get beyond the filesystem cache in RAM anyway. Conversely, if there's a huge amount of data and you avoid accessing the disk, then some RAM might get swapped out and you lose as well!

If you use strace to see the number of files opened and read just to start up a dynamically linked process, then a few extra file accesses per command invocation is neither here nor there.

As to understanding the rest of your code, I'll try and grab some more time this evening but I can't promise.

Yours,

f

 
Thanks fish, much appreciated.
Your code works, so the explanation of mine is more academic. Still interested, but whenever you have time.

Continuing the academic discussion:
So performance probably isn't an issue. The only thing I have left is ordering: with what I was trying to do, I get the command's output in order, e.g. a couple of lines of stdout, a line of stderr, some more lines of stdout. That can sometimes be useful when debugging, but it's not a big deal.
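That ordering idea can be sketched in a few lines: tag stdout at the shell level, fold stderr in with 2>&1, and classify each line by its prefix afterwards. The test command below is a stand-in for illustration, and the interleaving is only approximate, since the two streams are buffered independently:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Order-preserving (roughly) capture: prefix every stdout line with
# "OUT:" via a pipe, then merge stderr in with 2>&1. Lines without the
# prefix must have come from stderr.
my $cmd    = q{perl -e 'print "out\n"; warn "err\n"'};  # stand-in command
my $merged = `($cmd | perl -pe 's/^/OUT:/') 2>&1`;
for my $line ( split /\n/, $merged ) {
    if ( $line =~ s/^OUT:// ) {
        print "stdout: $line\n";
    }
    else {
        print "stderr: $line\n";
    }
}
```

The obvious caveat, as with the original function, is that a genuine stderr line that happens to start with the chosen prefix will be misclassified, so the tag needs to be unlikely to occur naturally.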

Malc

P.S. I obviously didn't search hard enough for other threads on this. Sorry.
 

Code:
$output = `($cmd|perl -e"while(<STDIN>){s/^/\@\#STDOUT%:/;print;}")2>&1`;

Look at this line: I think there is a typo.
I think it should be <STDOUT> instead of <STDIN>

like this:
Code:
$output = `($cmd|perl -e"while(<STDOUT>){s/^/\@\#STDOUT%:/;print;}")2>&1`;
 
sebastiannielsen: I don't agree. It's STDOUT from $cmd's point of view but it's STDIN from the perspective of the perl one-liner. The shell's pipe operator connects $cmd's STDOUT to perl's STDIN - think plumbing: each thing's output is the next one's input. To underline the point, <> is a read operator and you can't (usually) open STDOUT for reading.
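The plumbing point is easy to demonstrate with a throwaway example (nothing from the thread's actual commands):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The shell's pipe wires echo's STDOUT to the one-liner's STDIN, where
# -n reads it line by line. The \$_ stops the outer script from
# interpolating $_ before the shell ever sees the command.
my $out = `echo hello | perl -ne 'print "got: \$_"'`;
print $out;
```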

f

 