
Byte size of COMP field


gopikrishna (Programmer) - Sep 8, 2003

Hi,
I am working on conversion from VSAM to DB2. The VSAM layout has numeric field declarations like

Variable1 PIC S9(05)V99 COMP.

I cannot convert this field into an INTEGER column in the DB2 table because of the PIC clause S9(05)V99. I could not find any information about how to calculate the size of such declarations. Can anybody help me out in determining the size of these strange declarations?
 
Hi gopikrishna,

It's not that strange [smile]

COMP means 'the most efficient form for computation'. In almost all environments (except the AS/400) this is binary. If it is binary it will occupy four bytes in the VSAM record.

When it comes to converting to DB2, what you really need to know is the range (and precision) of values which it can hold. Your example has seven digits in total, two of them after the implied decimal point, so it looks like DECIMAL(7,2) - or NUMBER(7,2), I forget the DB2 syntax.
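
For illustration, a minimal sketch of how the value sits in that field (WS-AMOUNT is just an invented name, and the *> inline comments assume a compiler that accepts them):

    01  WS-AMOUNT    PIC S9(05)V99 COMP.
    *> range: -99999.99 through +99999.99; the V is only an implied
    *> decimal point - nothing is stored for it
    ...
        MOVE 12345.67 TO WS-AMOUNT
    *> the binary field now holds the scaled integer +1234567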

A caveat. It is possible (depending, perhaps, on compiler options) to hold numbers bigger than the Picture definition in binary fields, up to the maximum value which can be held in the space available.

Enjoy,
Tony

 
Not all COMP fields are 4 bytes in the IBM mainframe world. Depending on the size of your field (e.g. S9(2) COMP), it may actually be 2 bytes. On IBM mainframes it will be either 2 or 4 bytes: if the maximum value of your field definition (based on the number of digits) fits in 2 bytes, Cobol will generate a 2-byte binary field; otherwise it will be 4 bytes.
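
A quick sketch of that rule (the names are invented; sizes are as allocated by IBM mainframe COBOL):

    01  SMALL-CTR    PIC S9(2) COMP.    *> 1-4 digits: 2 bytes (halfword)
    01  HALF-CTR     PIC S9(4) COMP.    *> 1-4 digits: 2 bytes (halfword)
    01  FULL-CTR     PIC S9(9) COMP.    *> 5-9 digits: 4 bytes (fullword)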

There is also a little thing called alignment. COMP fields on the IBM mainframe will in some cases (when the SYNCHRONIZED clause is used) be aligned on a fullword boundary, which can cause gaps - slack bytes - in your record layout if you are not careful. Again, this is probably not a concern for what you are doing.
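
For example, a sketch of the slack-byte effect (invented names; assumes an IBM mainframe compiler and the SYNCHRONIZED clause):

    01  SOME-REC.
        05  REC-FLAG     PIC X.
        05  REC-COUNT    PIC S9(8) COMP SYNC.
    *> REC-COUNT is fullword-aligned, so 3 slack bytes are inserted after
    *> REC-FLAG and the group occupies 8 bytes rather than 5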

I agree with Tony above. You probably just need to worry about range and precision.

etom
 
Hi G,

The 1st place you should go for answers is the SOURCE. In this case IBM. Some web sites have links to IBM manuals.

Try one of those sites: click on the "IBM Manuals" button and follow the bouncing ball to the COBOL Reference Manual for the compiler you're using; go to the index, find COMP, read and learn.

You'll avoid a lot of conflicting info and learn at the feet of masters.

If you're then confused, come here and we'll battle it out.

Regards, Jack.
 
On IBM mainframes, COMP fields whose PICTUREs have more than nine 9's are eight bytes long and, if SYNCHRONIZED, are aligned on a doubleword boundary.
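
For instance (BIG-KEY is just an invented name):

    01  BIG-KEY    PIC S9(15) COMP SYNC.
    *> 10-18 digits: 8 bytes; with SYNCHRONIZED it starts on a doubleword boundary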
 
For the storage allocated to USAGE COMP or BINARY fields on a z/OS (or OS/390, or even MVS) system, see the table headed "Storage occupied" in the IBM COBOL documentation.

Note 1: Contrary to what it says, I do *not* think that the TRUNC compiler option impacts how much storage is allocated, just what values are allowed and how some "odd" hex values are interpreted.

Note 2: Ignore the "V" in the Picture clause, and just count the "9"s when determining the storage allocation.

Note 3: This table is ONLY valid for IBM z/OS (or equivalent) systems. How COMP (and even BINARY) data is stored is TOTALLY implementor-defined and (contrary to what was stated above) need NOT be the most efficient of the numeric data types - either in performance or in storage.
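
Applying Note 2 to the field in the original question (a quick sketch):

    01  VARIABLE1    PIC S9(05)V99 COMP.
    *> ignore the V and count the 9s: 7 digit positions, which falls in the
    *> 5-to-9 digit row of the table, so 4 bytes are allocated on z/OS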

Bill Klein
 
The original definition of COMP was "the most efficient form for computation", but compatibility issues have caused it to stray from there.
 
Although I have seen that wording before, I don't know where it "originated". I *know* it is not part of the '85 Standard (much less the 2002 Standard). I believe I searched once and it was NOT part of the '74 Standard either.

When you say that it was the "original meaning" - do you know when or where this "meaning" was given?

Bill Klein
 
How difficult can it be to make a small test program that tells you the size? Something like:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. COMPSIZE.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  MYCOMP-FIELD    PIC S9(5)V99 COMP.
    PROCEDURE DIVISION.
        DISPLAY LENGTH OF MYCOMP-FIELD.
        GOBACK.

Many compilers also produce an offset listing of the variables in storage with their length.

Regards,

Crox
 
To add to Crox's answer, there are loads of tools out there designed for this very purpose.

Starting with your own compiler's detailed listing - which many programmers routinely ignore.

My favorite tool is DDL (Data Definition Language). I don't know of many tools better than DDL - which can generate COPYable source (of a record definition) for many languages. Invaluable when using COBOL and other languages to exchange data.

COMP fields have generated controversy and confusion from very early on. They are the least compatible data types in COBOL: not only do compilers and platforms disagree on their meaning, but other languages choke on them.

To their credit, COMP fields were useful when memory and storage were scarce. These days, it's hard to justify their continued use. COBOL increasingly shares executables and data exchange services with other languages (ILE, CRE, COM, ...), and the single biggest stumbling block is COBOL-specific data types such as COMP.

Luckily, many COBOL compiler designers have solved this problem by incorporating native data types in COBOL (my compiler allows the use of a data type called NATIVE to describe binary data). Today, you can safely exchange INTEGER data types between COBOL and other languages - instead of trying to wrestle COMP fields into submission.

The major difference between COMP and INTEGER is in the minimum and maximum values they can hold.

COBOL does not have a proper equivalent of the INTEGER data type (as seen in C, etc.). A 16-bit integer can hold at most 5 digits (32,767 signed, or 65,535 unsigned). PIC 9(4) COMP holds only 4 digits: up to 9,999. And PIC 9(5) COMP goes to 99,999: more than a 16-bit integer can hold. But COBOL NATIVE data types are exact equivalents of the integer types.

On my compiler, NATIVE-2 uses 2 bytes, NATIVE-4 uses 4 bytes, and NATIVE-8 uses 8 bytes. These data types translate exactly to the integer types - short, long, and so on - that your OS or non-COBOL compiler supports.

When calling procedures written in a language other than COBOL, it is safer to use NATIVE data types rather than any of the other computational data types that COBOL supports. NATIVE eliminates data overflow and truncation. Simply find out how many bytes the INTEGER field on your platform is using and select the appropriate NATIVE type.
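
As a hedged illustration of the same idea for compilers without NATIVE: several other compilers (IBM Enterprise COBOL and Micro Focus among them) use COMP-5 as their "native binary" usage, and a call to a C routine might look roughly like this (the routine name is made up):

    01  WS-COUNT    PIC S9(9) COMP-5.
    *> COMP-5 is held in native binary byte order and matches a 32-bit C int
    ...
        MOVE 123456789 TO WS-COUNT
        CALL "c_routine" USING BY VALUE WS-COUNT
    *> "c_routine" is a hypothetical C function taking a plain int by value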

Dimandja
 
A star for Dimandja!

However, he says, "When calling procedures written in a language other than COBOL, it is safer to use NATIVE data types rather than any of the other computational data types that COBOL supports."

The RM/COBOL developer toolkit has a tool named CodeBridge that builds wrappers for other-language (typically C) procedures (such as Windows APIs). This tool was created as a response to the difficulties of calling such procedures. The nice thing about CodeBridge is that it allows the COBOL code to remain very COBOL-like in its use of COBOL data types, etc. The wrapper does all the conversion and, to a reasonable extent, provides diagnostics for the overflows, etc., that Dimandja describes.

Also, I agree with Bill Klein. The 1974 COBOL standard says, "A COMPUTATIONAL item is capable of representing a value to be used in computations and must be numeric." I can say with certainty that it was possible to have COMP mean a single (1) binary digit/octet and be certified compliant by the US government; few would argue that that particular representation matches their measure of efficiency (although it came about when memory size constraints might have forced different efficiency metrics to be used).



Tom Morrison
 
It was my statement that COMP meant the most efficient form for computation. OK, so I was wrong [smile]. I think that was the intent, if not the definition. And, yes, the representation of most data formats is implementor-defined at least to a degree.

The question here, if gopikrishna is still with us, was about how to determine the required DB2 declaration, equivalent to the COBOL definition he has. The answer, of course, is that it depends. Further information can be obtained from compiler listings, the data itself, and any documentation available.

It is LIKELY that any type capable of holding a number between -99999.99 and +99999.99 will do. As far as I know, there is no DB2 type which will result in a COBOL definition of S9(5)V99 COMP, so some manipulation in code may be required, depending on exactly what is being done and what is required of the conversion. The Cobol data types available in different compilers and/or environments are not really the issue: gopikrishna is after a DB2 declaration and is constrained by the Cobol which DB2 will generate.
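
A sketch of what that manipulation might look like (the table, column, and host-variable names are invented; DECIMAL(7,2) is assumed as the target type, and the host variable is what DCLGEN would typically generate for it):

    *> DB2 side, created once:  CREATE TABLE ACCT (AMOUNT DECIMAL(7,2) NOT NULL)
    01  VARIABLE1    PIC S9(05)V99 COMP.       *> field from the VSAM record
    01  HV-AMOUNT    PIC S9(5)V9(2) COMP-3.    *> host variable for the DECIMAL(7,2) column
    ...
        MOVE VARIABLE1 TO HV-AMOUNT            *> COBOL converts binary to packed decimal
        EXEC SQL
            INSERT INTO ACCT (AMOUNT) VALUES (:HV-AMOUNT)
        END-EXEC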

Enjoy,
Tony

 
Oh yeah, gopikrishna's question.

In DB2 you should be able to use DECIMAL(7,2) to map to PIC S9(5)V99 COMP - seven digits in total, two of them after the implied decimal point.

Dimandja
 
Before we can answer G's question we need more info. Does he want to keep it as a binary value, or will he accept a conversion to packed decimal?

Defining the field in DB2 as DECIMAL(7,2) effectively defines a comp-3 COBOL data item.

Is that what G wants?

Regards, Jack.
 
Regarding the earlier statement,

"COBOL does not have a proper equivalent of the INTEGER data type (as seen in C, etc.). A 16-bit integer can hold at most 5 digits (32,767 signed, or 65,535 unsigned). PIC 9(4) COMP holds only 4 digits: up to 9,999. And PIC 9(5) COMP goes to 99,999: more than a 16-bit integer can hold. But COBOL NATIVE data types are exact equivalents of the integer types."

Although they are still "processor dependent", the 2002 COBOL Standard *does* include (semi-)well defined solutions to this issue with the new BINARY-CHAR, BINARY-SHORT, etc USAGEs.

As far as I can tell these "match" operating system "native" binary data types. They do NOT include any PICTURE clause - but allocate such fields by "storage" size - with or without internal signs.
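
For example, declarations using these usages look like this (a sketch; exact sizes and support vary by compiler):

    01  WS-TINY     BINARY-CHAR SIGNED.       *> typically  8 bits
    01  WS-SHORT    BINARY-SHORT SIGNED.      *> typically 16 bits
    01  WS-LONG     BINARY-LONG SIGNED.       *> typically 32 bits
    01  WS-BIG      BINARY-DOUBLE UNSIGNED.   *> typically 64 bits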

Ask your "vendor of choice" when they plan on implementing these (and the new floating point) data types.


Bill Klein
 
