Hello everybody,
I wrote a little script that's supposed to monitor our print server. It checks each print job in the print queue for several states (Delivery Problem, Failed, Aborted).
For every state I run a separate find command and store the output in a variable, e.g.:
Code:
chk_1b=$(ssh spooler "find /data/SpoolIn -name job.history -exec grep -l FAIL {} \; | wc -l" | awk '{print $1}')
chk_1c=$(ssh spooler "find /data/SpoolIn -name job.history -exec grep -l ABOR {} \; | wc -l" | awk '{print $1}')
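I suspect part of the problem is that -exec grep ... \; starts a new grep process for every single job.history file. Would simply batching the calls with + (so grep gets many files at once) already help noticeably? Something like:
Code:
chk_1b=$(ssh spooler "find /data/SpoolIn -name job.history -exec grep -l FAIL {} + | wc -l" | awk '{print $1}')
chk_1c=$(ssh spooler "find /data/SpoolIn -name job.history -exec grep -l ABOR {} + | wc -l" | awk '{print $1}')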
The SpoolIn directory contains a subdirectory for each print job, and every print job subdirectory itself contains one subdirectory for each step performed within the job. The job state is found in the job.history file.
So there's quite a large directory structure the find command has got to search that might contain more than 5.000 subdirectories plus several files within each subdirectory.
Now - as you might have guessed already ;-) - all those find commands take quite a long time to complete ...
Is there any way to improve the script to increase the performance and reduce the search time?
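One idea I had, but haven't tested yet, is to walk the directory tree only once and reuse the resulting file list for every state check, roughly like this (assuming the paths under SpoolIn contain no spaces; the Delivery Problem check would be added the same way as FAIL and ABOR):
Code:
counts=$(ssh spooler '
  # walk /data/SpoolIn only once and reuse the file list for every state
  files=$(find /data/SpoolIn -name job.history)
  fail=$(echo "$files" | xargs grep -l FAIL 2>/dev/null | wc -l)
  abor=$(echo "$files" | xargs grep -l ABOR 2>/dev/null | wc -l)
  echo "$fail $abor"
')
chk_1b=$(echo "$counts" | awk '{print $1}')
chk_1c=$(echo "$counts" | awk '{print $2}')
Would that be the right direction, or is there a better way (e.g. a single grep pass that counts all states at once)?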
Best Regards,
Thomas