Writing massive amounts of data in COBOL.


HaydenClark (Programmer)
Oct 10, 2006
Hi.

We are receiving multiple text files which we then FTP over to our I64 VMS machine and process via a set of COBOL programs to write out new indexed files for use in our system.

The problem I am encountering is that the conventional way of writing the records, one at a time into the indexed files, is taking far too long. We are talking about truly gargantuan amounts of data, so any tips to improve the performance of the programs would be greatly appreciated.
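
For reference, the conventional approach is essentially a read/write loop straight into the indexed file, along the lines of the sketch below (the file names, record layout and key size are made up for illustration, not our real definitions):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LOADIDX.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE  ASSIGN TO "INPUT.TXT"
               ORGANIZATION IS SEQUENTIAL.
           SELECT IDX-FILE ASSIGN TO "MASTER.IDX"
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS DYNAMIC
               RECORD KEY   IS IDX-KEY.
       DATA DIVISION.
       FILE SECTION.
       FD  IN-FILE.
       01  IN-REC            PIC X(200).
       FD  IDX-FILE.
       01  IDX-REC.
           05  IDX-KEY       PIC X(20).
           05  IDX-DATA      PIC X(180).
       WORKING-STORAGE SECTION.
       01  WS-EOF            PIC X VALUE "N".
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT  IN-FILE
                OUTPUT IDX-FILE
           PERFORM UNTIL WS-EOF = "Y"
               READ IN-FILE
                   AT END
                       MOVE "Y" TO WS-EOF
                   NOT AT END
      *                Every WRITE maintains the index as well as the
      *                data bucket, which is what makes this slow.
                       MOVE IN-REC TO IDX-REC
                       WRITE IDX-REC
                           INVALID KEY DISPLAY "DUP KEY " IDX-KEY
                       END-WRITE
               END-READ
           END-PERFORM
           CLOSE IN-FILE IDX-FILE
           STOP RUN.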

Thanks.
Hayden.

"Of all the things I've lost I miss my mind the most.
 
VMS operating system version and COBOL vendor/version, please. That may help some of us point you to some options.

I'm not working with VMS at the moment, so I'm just going to point out a possible option, which may or may not be available for your OS/COBOL version.

You may have an OS and/or COBOL tool that will accept a flat file as input and create an indexed file for you.
Often, though not always, writing a flat file from COBOL and then indexing it outside the COBOL program is significantly faster. If you have this option available, try it.
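
In other words, the COBOL step only does plain sequential writes, roughly like this minimal sketch (file names and record layout are just illustrations; check the exact syntax against your compiler):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. WRITEFLAT.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT IN-FILE   ASSIGN TO "INPUT.TXT"
               ORGANIZATION IS SEQUENTIAL.
           SELECT FLAT-FILE ASSIGN TO "WORK.SEQ"
               ORGANIZATION IS SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  IN-FILE.
       01  IN-REC    PIC X(200).
       FD  FLAT-FILE.
       01  FLAT-REC  PIC X(200).
       WORKING-STORAGE SECTION.
       01  WS-EOF    PIC X VALUE "N".
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT  IN-FILE
                OUTPUT FLAT-FILE
           PERFORM UNTIL WS-EOF = "Y"
               READ IN-FILE
                   AT END
                       MOVE "Y" TO WS-EOF
                   NOT AT END
      *                Do any reformatting or validation here, then
      *                write a plain sequential record - no index
      *                maintenance at all in this program.
                       MOVE IN-REC TO FLAT-REC
                       WRITE FLAT-REC
               END-READ
           END-PERFORM
           CLOSE IN-FILE FLAT-FILE
           STOP RUN.

On VMS, the external indexing step would typically be the CONVERT utility driven by an FDL file that describes the indexed output (EDIT/FDL or ANALYZE/RMS_FILE can produce the FDL), but check the documentation for your VMS/COBOL versions - I am writing this from memory.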

Regards

Frederico Fonseca
SysSoft Integrated Ltd

FAQ219-2884
FAQ181-2886
 
Usually with large amounts of data, the buffer size is the issue. If you want good performance, set your files' buffers to match the sector size of the media you are reading from and writing to, and have your programs process that same unit of data at a time. It can be startling to the uninitiated how much difference this one factor makes.
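
As a rough sketch of where those knobs live in COBOL itself (the relevant clauses only, not a complete program; the clause names are standard but support and limits vary by vendor, and the numbers are placeholders you would tune to your media):

      *  ENVIRONMENT DIVISION, FILE-CONTROL entry:
           SELECT IDX-FILE ASSIGN TO "MASTER.IDX"
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS DYNAMIC
               RECORD KEY   IS IDX-KEY
      *        RESERVE asks for additional I/O buffers.
               RESERVE 8 AREAS.

      *  DATA DIVISION, FILE SECTION entry:
       FD  IDX-FILE
      *    BLOCK CONTAINS sets the physical block (bucket) size.
           BLOCK CONTAINS 32 RECORDS.
       01  IDX-REC.
           05  IDX-KEY   PIC X(20).
           05  IDX-DATA  PIC X(180).

On VMS, RMS buffer defaults can also be raised outside the program (SET RMS_DEFAULT), but the details depend on the OS and compiler versions, which is why we need them.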

How you make that happen varies with your OS version and COBOL vendor/version, so please provide those.

It is not possible for anyone to acknowledge truth when their salary depends on them not doing it.
 
If the records are in some random order, the easiest way I know of to speed up the process is to write them to a non-indexed file, then sort the records before writing them to the indexed file.

However, not all indexing systems work well with pre-sorted records. Increasing the buffer size may be the best way to increase speed.
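
A self-contained COBOL sort step might look something like this (file names and the 20-byte key are illustrative only; an external sort utility would do just as well):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SORTSTEP.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT WORK-FILE   ASSIGN TO "WORK.SEQ"
               ORGANIZATION IS SEQUENTIAL.
           SELECT SORTED-FILE ASSIGN TO "WORK.SRT"
               ORGANIZATION IS SEQUENTIAL.
           SELECT SORT-FILE   ASSIGN TO "SORTWORK".
       DATA DIVISION.
       FILE SECTION.
       FD  WORK-FILE.
       01  WORK-REC      PIC X(200).
       FD  SORTED-FILE.
       01  SORTED-REC    PIC X(200).
       SD  SORT-FILE.
       01  SORT-REC.
           05  SORT-KEY  PIC X(20).
           05  FILLER    PIC X(180).
       PROCEDURE DIVISION.
       MAIN-PARA.
      *    Sort the un-indexed work file into key order; the sorted
      *    file can then be loaded into the indexed file.
           SORT SORT-FILE
               ON ASCENDING KEY SORT-KEY
               USING  WORK-FILE
               GIVING SORTED-FILE.
           STOP RUN.

The sorted output can then be loaded in key order, or handed to whatever external indexing tool is available.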
 
If there is some "load utility" available, this will nearly always outperform code that writes individual entries into an indexed file.

You might consider generating "loadable" files and sorting them in the proper sequence on the mainframe before transmitting them to the target system for loading.
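
If no load utility is available, a pre-sorted file at least lets the load program open the indexed file for sequential access and write the records in ascending key order, which is usually cheaper than random-order inserts. A rough sketch, with illustrative names and layouts:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LOADSRT.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT SORTED-FILE ASSIGN TO "WORK.SRT"
               ORGANIZATION IS SEQUENTIAL.
           SELECT IDX-FILE    ASSIGN TO "MASTER.IDX"
               ORGANIZATION IS INDEXED
      *        With sequential access, records must be written in
      *        ascending key order - which the sort step guarantees.
               ACCESS MODE  IS SEQUENTIAL
               RECORD KEY   IS IDX-KEY.
       DATA DIVISION.
       FILE SECTION.
       FD  SORTED-FILE.
       01  SORTED-REC        PIC X(200).
       FD  IDX-FILE.
       01  IDX-REC.
           05  IDX-KEY       PIC X(20).
           05  IDX-DATA      PIC X(180).
       WORKING-STORAGE SECTION.
       01  WS-EOF            PIC X VALUE "N".
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT  SORTED-FILE
                OUTPUT IDX-FILE
           PERFORM UNTIL WS-EOF = "Y"
               READ SORTED-FILE
                   AT END
                       MOVE "Y" TO WS-EOF
                   NOT AT END
                       MOVE SORTED-REC TO IDX-REC
                       WRITE IDX-REC
                           INVALID KEY DISPLAY "OUT OF ORDER " IDX-KEY
                       END-WRITE
               END-READ
           END-PERFORM
           CLOSE SORTED-FILE IDX-FILE
           STOP RUN.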
 