
RM COBOL I/O error 94,67

Status
Not open for further replies.

SiouxCityElvis

Programmer
Jun 6, 2003
228
US
I'm working on importing a huge flat file, approximately 1 Gig.

SELECT UCC-INPUT ASSIGN TO
"/nic/data/UM000204_01.txt"
ORGANIZATION IS LINE SEQUENTIAL
ACCESS IS SEQUENTIAL.

This is what is on my line 435:
OPEN INPUT UCC-INPUT.

The program compiles clean, but gets a runtime error as follows:

COBOL I/O error 94,67 on UCC-INPUT file /nic/data/UM000204_01.txt.
COBOL I/O error at line 435 in program NIC_IMPORTUCC.COB

Unfortunately, the User's Guide I have describes error 94,66 on page A-28, but has no description for a 94,67 error.
Do you know what a 94,67 error is?

The files I have in the /nic/data directory are listed below (I'm only referring to the first one right now):

UM000204_01.txt
UM000204_02.txt
UM000204_03.txt
UM000204_04.txt
UM000204_05.txt
UM000204_06.txt

Thanks.
-David
 
David,

You can find this error described in the User's Guide for the version of RM/COBOL that is producing the diagnostic. The error, shown below, has to do with large files. If you have version 8, check the User Guide PDF file on the CD, approximately page A-32.

94, 67 - The file is too large. An attempt was made to open a file that is too large for this system. The file was probably created on another system using the LARGE-FILE-LOCK-LIMIT configuration keyword or is a version 3 indexed file, or an attempt was made to use a LARGE-FILE-LOCK-LIMIT value on a system that does not support files larger than 2 GB. See the description of the LARGE-FILE-LOCK-LIMIT keyword in the RUN-FILES-ATTR record on page 10-42, or the description of File Version Level 3 files on page 8-69 for more information.

Tom Morrison
 
David,

The RM/COBOL Version 8 User's Guide


94, 67 - The file is too large. An attempt was made to open a file that is too large for this system. The file was probably created on another system using the LARGE-FILE-LOCK-LIMIT configuration keyword or is a version 3 indexed file, or an attempt was made to use a LARGE-FILE-LOCK-LIMIT value on a system that does not support files larger than 2 GB. See the description of the LARGE-FILE-LOCK-LIMIT keyword in the RUN-FILES-ATTR record on page 10-42, or the description of File Version Level 3 files on page 8-69 for more information.

For a sequential file try using this in a configuration file:

RUN-FILES-ATTR LARGE-FILE-LOCK-LIMIT=4
RUN-SEQ-FILES USE-LARGE-FILE-LOCK-LIMIT=YES

Make sure you use the same LARGE-FILE-LOCK-LIMIT configuration for all systems that access sequential and relative files.

What operating system are you running on? If you are running on SCO OpenServer 5, then that OS does not support large files. The largest sequential file you can open with RM/COBOL will be 1GB.

-Robert Heady
Liant Software Corp.
 
Thanks Liant folks for replying.

I am running on Linux.

When you say "For a sequential file try using this in a configuration file:

RUN-FILES-ATTR LARGE-FILE-LOCK-LIMIT=4
RUN-SEQ-FILES USE-LARGE-FILE-LOCK-LIMIT=YES
"
Do you mean I should create a separate file such as Config.txt and put the RUN-FILES-ATTR stuff in there?
And then when I run something I would do a:
runcobol PROGRAM.COB C=Config.txt

Also, I'm not sure what you mean by
"Make sure you use the same LARGE-FILE-LOCK-LIMIT configuration for all systems that access sequential and relative files."
Does that mean that anytime I run a COBOL program on that Linux box I have to use the C=Config.txt option?

Thanks.
-David
 
If my files are all just barely over 1 Gig each, would splitting each into halves be a good solution? I'm wondering if I can avoid all this configuration code by doing that, or if the configuration approach is really the best route.
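If splitting were chosen (the thread ultimately goes with the configuration records instead), a line-based split would keep LINE SEQUENTIAL record boundaries intact, since each record ends at a newline. The file names and line counts below are illustrative, not from the thread:

```shell
# Illustrative only: splitting at a line boundary means no record is cut in
# half. For a real 1 Gig file the command would look something like:
#   split -l 5000000 /nic/data/UM000204_01.txt /nic/data/UM000204_01.part.
# Demonstrated here on a small generated file:
printf 'rec1\nrec2\nrec3\nrec4\n' > sample.txt
split -l 2 sample.txt sample.part.
# Produces sample.part.aa (rec1, rec2) and sample.part.ab (rec3, rec4);
# each half can then be read by the same OPEN INPUT / READ loop in turn.
```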

Thanks.
-David
 
David,

Are you running on a Linux that allows large files (e.g. Linux 7.3 or higher)? Are you using the version of RM/COBOL for Large File Linux? The runtime banner will tell you this.

If the version of Linux you are running does not allow large files then the version of RM/COBOL you are using can't open the file.

If the version of Linux you are running does allow large files and if the runtime does not support large files then you will get a configuration error 409 if you try to use those configuration records.

For record and file locking to perform correctly, all run units opening a file must use the same file lock limit.

You can add those configuration records to your existing configuration file and execute the programs with the C= option.

You can also load the configuration file automatically by creating a configuration file named runcobol.cfg and copying it to the runtime execution directory (usually /usr/bin). The shared object librmconfig.so should also be in that directory. This way, every time you execute runcobol the configuration file is loaded automatically.
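As a concrete sketch of the explicit C= approach (the file name nic.cfg is an example, not from the thread):

```shell
# Write the two suggested configuration records into a local file.
# Any file name works when it is named explicitly with the C= option.
cat > nic.cfg <<'EOF'
RUN-FILES-ATTR LARGE-FILE-LOCK-LIMIT=4
RUN-SEQ-FILES USE-LARGE-FILE-LOCK-LIMIT=YES
EOF

# Then run the program with the configuration file named explicitly:
#   runcobol NIC_IMPORTUCC C=nic.cfg
# (shown as a comment here, since runcobol exists only on an RM/COBOL system)
```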

-Rob
 
Thanks all.
The solution of setting options on my config file worked
with:
RUN-FILES-ATTR LARGE-FILE-LOCK-LIMIT=4
RUN-SEQ-FILES USE-LARGE-FILE-LOCK-LIMIT=YES

I tested with only the first file (1 Gig flat file) and it took nearly an hour to read it all in and populate the indexed files. So, I guess with 6 flat files at approximately 1 Gig each, it'll take a total of around 6 hours.
LOTS 'O DATA!

-David
 
David,

If this is going to be a regular event, you might want to read the User Guide information about indexed file performance. Out-of-the-box performance is balanced between speed and memory use. It is possible that you can improve your 6 hours by configuring for more memory.
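A minimal sketch of the kind of tuning Tom means, assuming the RUN-INDEX-FILES configuration record and a buffer-count keyword along the lines of DATA-BUFFERS (verify the exact keyword name and choose a value against the indexed file performance section of your version's User's Guide before relying on this):

```
# Example only: allocate more index block buffers, trading memory
# for fewer disk reads during heavy indexed-file population.
# The value 200 is an arbitrary illustration.
RUN-INDEX-FILES DATA-BUFFERS=200
```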

Tom Morrison
 
Okay. Upon running this import application where I take the raw data (6 different files, all about 1 Gig each), I noticed that the sum of the sizes of the 7 different indexed files I populate is only about 1.6 Gigs.

I put display statements to show when each import file was finished (in the AT END Clause of its READ statement) and it verifies to me that each raw data file of 1 Gig is getting to the end.

What I'm wondering is why my 7 different index files only add up to 1.6 Gigs versus the sum of my 6 different raw data files of 6 Gigs?

Since every record of the raw data is being used and the record size of each index record I populate with it is the same length(if not longer with additional alternate keys) I had the above question.

I did a bit of research in the Manuals and it states on page 4-17 that the DATA-COMPRESSION default is YES. So, would this be the reason my Index files sizes sum up to only 1.6 Gigs?

Thanks again.
-David
 
Yes, that is the reason.

You can have data compression and/or key compression, and as you can see it can make a HUGE difference.

When using data compression on a file that gets lots of rewrites and record deletes/adds, it is advisable to reorganize the file regularly (once a week/month).

Regards

Frederico Fonseca
SysSoft Integrated Ltd
 
David asks, "...DATA-COMPRESSION default is YES. So, would this be the reason my Index files sizes sum up to only 1.6 Gigs?"

Yes, that is the reason, along with, to a lesser extent, key compression.

You may use the rmmapinx utility to determine the number of records contained in the indexed file.

Tom Morrison
 