
Getting multiple background process return codes in parent KSH


marbaise

Programmer
Aug 1, 2002
Hi,

I'm actually looking for some ideas regarding background process return code handling.

In KSH, I would like to be able to get return codes of a bunch of background processes launched by the parent.

The aim is to return success when all background processes succeed and a warning when at least one of the children has failed.

I'm looking for some suggestions...

Flow should be:
-Parent ksh is started.
-For each file in a list, run a background process.
-Parent waits for completion.
-After all children have completed, check whether any of them failed.
-Log the result.
-End.

Thanks for your help.

Philippe.
 
Hi Philippe,

There have been several threads on similar subjects on this and/or other UNIX forums. Have you tried searching them?

I think what these threads have suggested is that each child process finishes with a status (0 = success, non-zero = failure) passed back to the parent via the exit <n> command. The parent script tests this status and runs the next child if it succeeded, or errors out if not.
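
For example, a minimal sketch of that sequential pattern might look like this (child1, child2 and child3 are just placeholder script names):

#!/bin/ksh
# Run the children one after another and stop at the first failure.
for CHILD in ./child1 ./child2 ./child3     # placeholder script names
do
    $CHILD
    RC=$?
    if [ $RC -ne 0 ]
    then
        echo "$CHILD failed with status $RC" >&2
        exit $RC
    fi
done
echo "all children succeeded"
exit 0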

If you require further help, then please re-post.

I hope that helps.

Mike
 
Hi Mike,

I've searched all the forums known to me and found solutions like yours. My goal here is not to run a sequence of scripts (that isn't worth the effort of putting them in the background, except for daemons).

My goal here is to split a huge job into ten parts and run them in parallel. The operation can only succeed when all ten parallel jobs have terminated successfully.

The objective is to reduce the time needed to perform the operations (huge database cleaning operations) so that the unavailability of the environment is kept to a minimum.

So far I've ended up with a not-so-nice solution where each status is saved in a temporary file.

See below the proof of concept:

for NUM in 1 2 3 4 5 6 7
do
    # Each child sleeps, reports, records its status in its own file and exits with it
    (sleep $NUM; echo "$NUM terminated"; echo $NUM > multiplesubprocessrc.$NUM.log; exit $NUM) >> multiplesubprocessrc.log &
done
wait                        # block until every background child has finished
echo "parent received child completion"
for FILE in multiplesubprocessrc.*.log
do
    read RC < $FILE         # pick up the status each child left behind
    echo "$RC from $FILE"
done

My question is more: "Is there a way to avoid writing to a temp file?" I've read a lot regarding the wait command, but I've never seen examples working the way I want.

Hope this is a little clearer than before.
I've seen a thread in this forum, but it only covered a one parent, one child relation, not one parent with multiple children. The number of children will vary depending on the number of scripts launched in the background
(a for..in..do loop over a list of scripts).

I'm really open to suggestions. ;-)

Philippe
 
The background scripts may send a signal to the waiting shell in case of error.
In your shell's man page, pay attention to $$, export, trap and kill.
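
For example, a rough (untested) sketch along those lines, where the sleep and the deliberate failure of part 3 just stand in for the real work:

#!/bin/ksh
FAILED=0
trap 'FAILED=1' USR1        # a child sends USR1 to report an error
PARENT=$$                   # parent PID; export it so separate child scripts can see it
export PARENT

for NUM in 1 2 3 4
do
    (
        sleep $NUM                               # stand-in for the real work
        [ $NUM -ne 3 ] || kill -USR1 $PARENT     # pretend part 3 fails, for demonstration
    ) &
done

# A trapped signal interrupts wait, so keep waiting until every child is gone.
until wait
do
    :
done

if [ $FAILED -ne 0 ]
then
    echo "WARNING: at least one child failed"
    exit 1
fi
echo "all children succeeded"
exit 0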

Hope This Helps, PH.
 
You can use flag files and schedule the jobs using the at command. This is more transparent to operators and easier to check in terms of process flow.

e.g. script1 - does what you want to do
e.g. script2 - does something else
e.g. script3 - does even more

script4 is your control script (which is the parent). Each of the scripts (script1, script2, script3) creates a flag to show it is 'active' and another 'finished' flag once it has completed.

You can then use script4 to schedule any dependencies that you may have, or execute them in parallel using the at command, e.g. in script4:

at -f script1 now
at -f script2 now
at -f script3 now

This will then run all the scripts 'in parallel'; you can check the progress via the flag files or by using at -l to see how the jobs are progressing.
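
A rough sketch of how the flag files might be handled (the /tmp paths are just examples):

# --- inside script1 (script2 and script3 would do the same with their own names) ---
touch /tmp/script1.active           # example flag: the job is running
# ... the real work goes here ...
rm -f /tmp/script1.active
touch /tmp/script1.finished         # example flag: the job has completed

# --- inside script4, after the at submissions: poll until every job is done ---
until [ -f /tmp/script1.finished ] && [ -f /tmp/script2.finished ] && [ -f /tmp/script3.finished ]
do
    sleep 10
done
echo "all jobs finished"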

Hope this helps
 
PHV,
Thanks for your suggestion, I will investigate.
idiotboy, (what a nickname... :) )
I cannot use your suggestion because the launching of the scripts in my environment is completely out of my hands. Interactive operational tasks are forbidden. We use the Tivoli Job Scheduling System for all our batch scheduling due to a lot of dependencies between machines and applications. (My company is really big - we have a few hundred systems.) A dedicated team is in charge of all operational tasks; they just have to react in case of failure. All the batches we design must handle the task from beginning to end and respond with either success, warning or failure.

In my case, the number of subprocesses will not always be the same from one run to another (in order to allow more flexibility), so it is not easy to use the "at now" command and automate the response handling (or I'm too dumb to see how to do it, which is another thing...).

Thanks to both of you for your efforts...

Philippe
 
Your proof of concept design is exactly what I would do.
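
If you do want to lose the temp files one day, one possibility (untested sketch) is to remember each child's PID and wait on each one individually; wait <pid> then reports that particular child's exit status:

#!/bin/ksh
FAILED=0
PIDS=""

for NUM in 1 2 3 4 5 6 7
do
    (sleep $NUM; [ $NUM -ne 3 ]) &      # stand-in for the real work; part 3 pretends to fail
    PIDS="$PIDS $!"                     # $! is the PID of the child just launched
done

for PID in $PIDS
do
    wait $PID                           # returns the exit status of that particular child
    RC=$?
    echo "child $PID finished with status $RC"
    if [ $RC -ne 0 ]
    then
        FAILED=1
    fi
done

if [ $FAILED -ne 0 ]
then
    echo "WARNING: at least one child failed"
    exit 1
fi
echo "all children succeeded"
exit 0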

Mike

"Deliver me from the bane of civilised life; teddy bear envy."

Want to get great answers to your Tek-Tips questions? Have a look at faq219-2884

 