
Transformer Model stopped working after getting too large


cfrench (Programmer), Mar 3, 2003
Please assist if possible...

I created a model that has a lot of dimensions. The cube ran fine and updated daily through Scheduler for approximately two months, but now it will not refresh through Scheduler and I cannot open the .pyi file to view it, make adjustments, or refresh manually. The error is REPOS-E-NOMEM, insufficient memory (which is pretty descriptive). The file has grown to about 872MB.

Any suggestions on how to a) open the model, b) fine-tune it to reduce its size, or c) anything else that could help alleviate this issue?

Thanks.
 
Here's a solution to a): opening the model.

Do you have a .mdl file saved as a backup?

If so, reboot the server.

Go into the temp directories that you build your cubes in. Either the batch script that builds the cubes specifies these, or you can see them under File/Preferences in Transformer.

Delete all of the temp files from these directories (normally this means all files).

Open the .mdl file. Save it as a .pyi.

You have now restored your model.
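
If you end up doing this more than once, the temp-file cleanup step is easy to script. A minimal sketch in Python, where the directory paths are placeholders for whatever your batch script or File/Preferences actually points at:

```python
import glob
import os

# Placeholder paths: substitute the temp directories your cube builds
# actually use (check the batch script or Transformer's File/Preferences).
TEMP_DIRS = [r"E:\cognos\temp\model", r"E:\cognos\temp\data"]

for temp_dir in TEMP_DIRS:
    # Normally everything left in these directories is a stale temp file.
    for path in glob.glob(os.path.join(temp_dir, "*")):
        if os.path.isfile(path):
            print("deleting", path)
            os.remove(path)
```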
 
Thank you for the reply.

I do not have an .mdl for that Transformer model (you can bet I will next time).

Any alternatives? Also, what is causing this so I may avoid future issues?
 
You should always save an .mdl file of any model that you're working on. They make excellent backups.
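
Once you're saving an .mdl from Transformer, it's worth keeping dated copies of it as well. A quick sketch; the model path and backup folder here are made-up placeholders:

```python
import os
import shutil
import time

# Placeholder paths: point these at your real .mdl and backup folder.
MDL_FILE = r"E:\cognos\models\mymodel.mdl"
BACKUP_DIR = r"E:\backups\transformer"

# Keep a timestamped copy so there is always a recent .mdl to fall back on.
stamp = time.strftime("%Y%m%d")
dest = os.path.join(BACKUP_DIR, "mymodel_%s.mdl" % stamp)
shutil.copy2(MDL_FILE, dest)
print("backed up to", dest)
```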

Do you have a log file? It may hold a clue to your crash. How many rows of data are you processing?

By default it should write to your program files/cognos/cer2/bin/ directory.

It might be as simple as you ran out of disk space.
 
The .pyi file is 872MB and, according to the log file, there are 7459 rows and 1714844 categories.

The machine has 20GB of available hard disk space and 4GB of RAM.

We moved it to another machine and received the same error.

I really appreciate your assistance thus far - thank you.
 
Do you have one disk drive or two, and what are their sizes respectively? :->
 
2 drives:

c: 16.9 GB - 3.18 GB available
e: 67.7 GB - 17.7 GB available
 
The fact remains that this looks like a memory problem, and that is where I would start my investigation. I would begin by cleaning my Temp folder of anything in there, since all of it will be created again. I would then empty my Recycle Bin and reboot before running the cube build again. I would even go so far as to do a disk defrag on both disks. Hope this helps.

Regards :->
 
As you work with a .pyi file, the file grows.
From time to time, save it as an .mdl, then reopen the .mdl and save it back to a .pyi; the new .pyi file will be smaller.
Saving as .mdl compacts the .pyi.
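
If you want a reminder of when it's time for that compaction cycle, something like this can watch the .pyi size. Just a sketch; the path and the 500MB threshold are arbitrary placeholders:

```python
import os

# Placeholder path and threshold: adjust both for your environment.
PYI_FILE = r"E:\cognos\models\mymodel.pyi"
THRESHOLD_MB = 500

size_mb = os.path.getsize(PYI_FILE) / (1024.0 * 1024.0)
if size_mb > THRESHOLD_MB:
    # Time for the save-as-.mdl / reopen / save-as-.pyi compaction cycle,
    # before the file grows too large to open at all.
    print("pyi is %.0fMB - compact it via .mdl" % size_mb)
```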
 
Well, this is what I get from our system administrator...

"I find this hard to believe! This is on our largest server. It has 4 Xeon 1.2GHz processors. This machine has 2GB of RAM plus 4GB of virtual RAM (paging file): THIS IS A HUGE AMOUNT OF RAM!"

Is it possible for the cube to generate in excess of the aforementioned specs?
 
You have to remember that Transformer also uses a temporary folder, where temporary files are created during the cube creation process.
You can specify your temporary folder in Trnsfrmr.ini or as parameters when starting the cube creation.
If you don't have enough temporary hard disk space, the cube creation will crash.

You can also specify how much memory (RAM) you want Transformer to use for reading and sorting the temporary files.

On Unix, there are some limits that need to be changed by root to give the Transformer process more memory.
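
Since a full temp drive is such a common cause of a crashed build, a disk-space precheck before kicking off the cube can save a wasted run. A sketch only; the temp path and the required-space figure are assumptions you'd tune from your own build history:

```python
import shutil

# Placeholder values: point TEMP_DIR at Transformer's temporary folder and
# set NEEDED_GB from what your builds have historically consumed.
TEMP_DIR = r"E:\cognos\temp"
NEEDED_GB = 20

free_gb = shutil.disk_usage(TEMP_DIR).free / (1024.0 ** 3)
if free_gb < NEEDED_GB:
    raise SystemExit("only %.1fGB free in %s - the build will likely crash"
                     % (free_gb, TEMP_DIR))
print("disk precheck OK: %.1fGB free" % free_gb)
```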
 
I think your model is generating temp files on the C: drive, which ran out of space.

I would like to give some perspective on storage/memory statistics.

I have a cube with a .pyi size of 50MB; it takes 70% of the machine's RAM and 1.89GB of hard drive space during its creation. Now you can imagine what kind of hard drive space your cube needs (50MB vs. 900MB).
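
Scaling that ratio gives a rough feel for the numbers in this thread. Very crude, since temp usage won't scale perfectly linearly with .pyi size, but:

```python
# Crude linear extrapolation from the 50MB example above - an assumption,
# not a measured figure for this particular model.
example_pyi_mb = 50.0
example_temp_gb = 1.89

this_pyi_mb = 872.0
estimated_temp_gb = example_temp_gb * (this_pyi_mb / example_pyi_mb)
print("estimated temp space: %.0fGB" % estimated_temp_gb)  # roughly 33GB
```

That estimate is well above the 17.7GB free on the e: drive, never mind the 3.18GB on c:.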

As draoued said, you can control the use of RAM, but that increases the build time.

To answer your question about how to open the model: when it failed through Scheduler, it should have generated a suspended model. Open Transformer and look for a suspended model; if you find one, open it and close it. That makes the .pyi file readable again. If that doesn't happen, your scheduler may have generated xxxx.pui and xxx.lck files in your temp folder; clear them, restart the machine, and open the model. It should work.
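
For that lock-file cleanup, a throwaway script can do the sweep. Sketch only; the temp path is a placeholder, and the *.pui / *.lck patterns are taken straight from the advice above:

```python
import glob
import os

# Placeholder: Transformer's temp folder, as set in your preferences.
TEMP_DIR = r"E:\cognos\temp"

# Remove leftover suspended-model and lock files so the .pyi can be reopened.
for pattern in ("*.pui", "*.lck"):
    for path in glob.glob(os.path.join(TEMP_DIR, pattern)):
        print("removing", path)
        os.remove(path)
```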

Hope this helps...
 
There are limits on how much box you can throw at Transformer.

Typically, multiple processors are used to read data into the model (ensure Enable Multi-Processing is checked on the properties of any data source with >100000 records).

The physical cube build only uses one processor. There's an internal Cognos benchmarking document demonstrating that the build goes faster using only one processor than using multiple processors.

For your temp directories you should always use two: one for temp model space, and another for temp work space. Ensure that these point at your larger drive. If you use one directory, the build tends to be slower.

In one of your .ini files you can set the Transformer read and write cache sizes. I believe the maximums are 32MB write and 128MB read. I think it's the cognos.ini or trnsfrmr.ini file.

A write cache larger than 32MB is inefficient because of the way that written objects are stored.
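
If you'd rather script the cache settings than edit the .ini by hand, something like the following would do it. Be warned: the section and key names (Transformer, WriteCacheSize, ReadCacheSize) are my guesses for illustration, not confirmed names; check your own trnsfrmr.ini before trusting them:

```python
from configparser import ConfigParser

# CAUTION: the section and key names below are guesses for illustration -
# verify them against your actual trnsfrmr.ini / cognos.ini before using.
INI_PATH = r"C:\Program Files\Cognos\cer2\bin\trnsfrmr.ini"  # placeholder path

cp = ConfigParser()
cp.optionxform = str  # preserve key case in case Transformer is case-sensitive
cp.read(INI_PATH)
if not cp.has_section("Transformer"):
    cp.add_section("Transformer")
cp.set("Transformer", "WriteCacheSize", "32")   # MB; 32 is the stated write maximum
cp.set("Transformer", "ReadCacheSize", "128")   # MB; 128 is the stated read maximum

with open(INI_PATH, "w") as f:
    cp.write(f)
```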
 