Working-Storage Problem 1

Status
Not open for further replies.

anubhavrastogi

Programmer
Nov 1, 2005
3
US
We hit the maximum size limit of the Working-Storage Section in our modules. We reached the limit because of large WS tables defined in the modules, which we use for further processing. The EXTERNAL clause is also used to make the data available to other modules. These WS tables are populated from DB2 tables. Is there any way I can do away with the WS table process so that we don't hit the limits, without the new approach degrading performance?

Data (rules/profiles) is written to the WS table to make later SEARCH operations on it faster. I believe writing to a file, or fetching from DB2 for each customer, may cause a problem and hurt performance.

This whole process runs for some 8 million customers; going through a DB2 query for each case would surely be a performance degradation compared to a COBOL SEARCH.
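For reference, the kind of setup described above boils down to something like this sketch. All names and sizes here are made up, and a real module would size the table to the actual profile data; SEARCH ALL also requires the table to be loaded in order on its ASCENDING KEY:

Code:
       WORKING-STORAGE SECTION.
      *> Hypothetical EXTERNAL benefit-profile table; names and
      *> sizes are illustrative only.
       01  WS-BENEFIT-TABLE EXTERNAL.
           05  WS-BENEFIT-ENTRY OCCURS 5000 TIMES
               ASCENDING KEY IS WS-BENEFIT-ID
               INDEXED BY BEN-IDX.
               10  WS-BENEFIT-ID       PIC X(08).
               10  WS-BENEFIT-PROFILE  PIC X(200).
       ...
      *> Binary search of the loaded (sorted) table.
           SEARCH ALL WS-BENEFIT-ENTRY
               AT END
                   MOVE 'N' TO WS-BENEFIT-FOUND
               WHEN WS-BENEFIT-ID (BEN-IDX) = LS-WANTED-ID
                   MOVE WS-BENEFIT-PROFILE (BEN-IDX)
                     TO LS-PROFILE-OUT
           END-SEARCH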
 
You have a WS table with 8 million occurrences?
Provided your DB2 tables are properly indexed and your SQL is decently optimized, give DB2 a chance!

Hope This Helps, PH.
Want to get great answers to your Tek-Tips questions? Have a look at FAQ219-2884 or FAQ181-2886
 
The Working-Storage contains the various fields which may be assigned to these 8 million customers. Every night these 8 million customers are processed, and if they meet the eligibility criteria they may be evaluated positively for one of these fields in Working-Storage.

Now eligibility is checked for these benefits. A customer can get up to 5 types of benefits at a time. Each benefit has a profile, eligibility criteria, and some override criteria... For good performance it is this benefit data that is stored in Working-Storage, not the 8 million customer records.

In the nightly cycle, for each customer read from DB2, this check is done by comparing against the benefits (which were stored in Working-Storage on the first call).

Please let me know if more details are needed.
 
anubhavrastogi said:
Is there any way I can do away with the WS table process so that we don't hit the limits, without the new approach degrading performance?

I believe what you are looking for is the Cobol "automagic" directive!

Code:
*COBOL AUTOMAGIC=ON

Sorry, I couldn't resist! [smile]

BTW, if anyone gets that to work, please let me know! [wink]

Code what you mean,
and mean what you code!
But by all means post your code!

Razalas
 
From the mention of "DB2", can I assume that this is an IBM z/OS environment issue? If so, you should look at upgrading to the latest release of Enterprise COBOL (V3.4), which INCREASED the size of various WS limits.

See:

If this is NOT a z/OS issue, please tell us what compiler and operating system you are on.

Bill Klein
 
VS COBOL II is a LONG-unsupported compiler for z/OS. It had LOTS of limits that have been "relaxed" in newer IBM compilers. I would STRONGLY suggest that you try to get your shop upgraded to a CURRENTLY SUPPORTED IBM COBOL compiler.

(The latest release of DB2 explicitly EXCLUDES support, for some features, for the older pre-Language Environment compilers such as VS COBOL II.)

Bill Klein
 
Build a temporary indexed VSAM file, load it up, and use random access to find the data. This might take longer. We store huge amounts of data in a System Dictionary that we use all the time, stuff like all of the accounts for accounting and system codes for lookup. You have to decide which storage the program uses most. Most likely this would slow down the program a bit, but if you have a table where you usually want one and only one item, it may still be pretty fast with random access on a keyed file.
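As a rough illustration of that indexed-file idea (the file name, DD name, and record layout here are invented):

Code:
      *> Hypothetical indexed VSAM lookup file.
           SELECT BENEFIT-FILE ASSIGN TO BENVSAM
               ORGANIZATION IS INDEXED
               ACCESS MODE IS RANDOM
               RECORD KEY IS BF-BENEFIT-ID
               FILE STATUS IS WS-BF-STATUS.
       ...
      *> Random read by key instead of a WS table SEARCH.
           MOVE LS-WANTED-ID TO BF-BENEFIT-ID
           READ BENEFIT-FILE
               INVALID KEY MOVE 'N' TO WS-BENEFIT-FOUND
               NOT INVALID KEY PERFORM 2000-PROCESS-BENEFIT
           END-READ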

If you do not like my post feel free to point out your opinion or my errors.
 
There might be a way to load only part of the table if the table and the data are sorted properly. Then there is the use of packed fields. There is probably some trick that fools the compiler and gets around the storage limits while still being usable. There is always breaking the program up, with one program calling another. Everything you can think of may slow it down a little.
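On packed fields: a COMP-3 item takes roughly half the bytes of the equivalent zoned-decimal field, which can shrink a big WS table noticeably. A made-up example of the sizes involved:

Code:
       01  WS-AMT-ZONED    PIC S9(9).          *> 9 bytes (zoned)
       01  WS-AMT-PACKED   PIC S9(9) COMP-3.   *> 5 bytes (packed)
       01  WS-CNT-BINARY   PIC S9(8) COMP.     *> 4 bytes (binary)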

If you do not like my post feel free to point out your opinion or my errors.
 