Thanks Steve. Appreciate your quick response on this.
The program (rather, the transaction) is shut down in two scenarios: 1. the program receives a "special message" saying 'STOP THE EXECUTION', or 2. the operator terminates it before bringing CICS down at the end of the day.
The program processes the message and goes into sleep mode. When a new message arrives on the queue it wakes up, processes the message(s), and on completion goes back to sleep (we use the MQ signalling feature). The transaction stays alive until one of the above scenarios happens.
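To make the flow concrete, the main loop looks roughly like this. It is heavily simplified: the paragraph and field names are illustrative, the constants are the usual ones from the CMQV copybook, and for readability the sketch uses a plain MQGET wait interval instead of the signalling option we actually drive. The buffer and the MQGET call itself are shown a little further down.

           PERFORM 1000-OPEN-QUEUE
           PERFORM 2000-GET-AND-PROCESS
               UNTIL WS-STOP-REQUESTED = 'Y'
           PERFORM 9000-CLOSE-QUEUE
           EXEC CICS RETURN END-EXEC.

       2000-GET-AND-PROCESS.
      *    "Sleep" on the queue until a message arrives
           COMPUTE MQGMO-OPTIONS = MQGMO-WAIT + MQGMO-FAIL-IF-QUIESCING
           MOVE MQWI-UNLIMITED TO MQGMO-WAITINTERVAL
           PERFORM 3000-MQGET-MESSAGE
           IF WS-COMPCODE = MQCC-OK
      *        The "special message" tells us to shut down
               IF WS-MSG-BUFFER(1:18) = 'STOP THE EXECUTION'
                   MOVE 'Y' TO WS-STOP-REQUESTED
               ELSE
                   PERFORM 4000-PROCESS-MESSAGE
               END-IF
           END-IF.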
Yes, it's a COBOL program, and we use the MQ calls MQOPEN, MQGET, MQPUT and MQCLOSE to deal with the queues.
The messages are read into WORKING-STORAGE (the buffer is defined as 2.5 MB because we anticipate very large messages).
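For reference, the buffer side of it looks roughly like this. The field names are illustrative, 2,621,440 is just 2.5 MB written out, and the MQMD/MQGMO structures come from the standard MQ COBOL copybooks.

       WORKING-STORAGE SECTION.
      *    2.5 MB buffer, sized for the largest message we expect
       01  WS-MSG-BUFFER            PIC X(2621440).
       01  WS-BUFFER-LENGTH         PIC S9(9) BINARY VALUE 2621440.
       01  WS-DATA-LENGTH           PIC S9(9) BINARY.
       01  WS-HCONN                 PIC S9(9) BINARY.
       01  WS-HOBJ                  PIC S9(9) BINARY.
       01  WS-COMPCODE              PIC S9(9) BINARY.
       01  WS-REASON                PIC S9(9) BINARY.
       01  WS-MSG-DESC.
           COPY CMQMDV.
       01  WS-GET-OPTIONS.
           COPY CMQGMOV.

       3000-MQGET-MESSAGE.
           CALL 'MQGET' USING WS-HCONN, WS-HOBJ,
                              MQMD, MQGMO,
                              WS-BUFFER-LENGTH, WS-MSG-BUFFER,
                              WS-DATA-LENGTH,
                              WS-COMPCODE, WS-REASON.

Every message, whatever its actual size, is read into that one WS-MSG-BUFFER area.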
We are not using GETMAIN (and hence not FREEMAIN) right now.
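Just so we are both picturing the same thing: if we ever did move that buffer out of WORKING-STORAGE into explicitly acquired task storage, I imagine it would look something like the sketch below. We do not do this today; the names and the size are only illustrative.

       LINKAGE SECTION.
       01  LS-MSG-BUFFER            PIC X(2621440).

       PROCEDURE DIVISION.
      *    Acquire a 2.5 MB area for this message ...
           EXEC CICS GETMAIN
                SET(ADDRESS OF LS-MSG-BUFFER)
                FLENGTH(2621440)
           END-EXEC
      *    ... process the message in LS-MSG-BUFFER ...
      *    ... then hand the storage back as soon as we are done
           EXEC CICS FREEMAIN
                DATA(LS-MSG-BUFFER)
           END-EXEC

Today, though, everything sits in the single WORKING-STORAGE buffer shown above.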
Steve, I'd like to confirm my understanding with you. When the program is entered for the first time (by the transaction, which stays in this wake/sleep cycle all day), WORKING-STORAGE is allocated for it. For each subsequent message the transaction drives the same 'live' program again, which uses the same WORKING-STORAGE it allocated earlier for the first message.
In other words, the WORKING-STORAGE allocated to process the 1st message is the same storage used for the 2nd, 3rd, 4th... messages. Am I right?
If that is the case, then I wonder why our storage administrators report that our transaction is accumulating storage with each hit/message processed, and advise us to free the storage it occupies, i.e. to tune the program to handle the problem. Any further thoughts on this?
For the time being, since we are in a test region, we terminate the transaction at regular intervals, which releases all the storage the program has accumulated. We cannot afford to do that once we go live in production (a real-time scenario).
I hope this is clear and makes sense. Thanks for your valuable help and interest.
If anything is unclear or you need specifics, I can provide more detail.
I appreciate your time. Thanks again! - Vinodh