Mercator Performance Tuning 2


MercDb2 (Programmer, US)
Jan 9, 2003
We are running maps on the OS/390 and seeing some performance hits. Any tips or tricks on how to tune Mercator?

Our input consists of large datasets on which we are performing simple address transformations.

 
There are a few things to try:
- Keep your input type trees simple. Although for future-proofing or correctness you may want your type trees to reflect your data structure, it is more efficient to lump things that you are mapping 'as is' from input to output into a single object. Switch on input tracing in the Design Studio and see how many objects you are actually creating.
- In production, keep your logging to a minimum, especially map tracing. We normally run with just the execution audit, and error-only adapter trace.
- Try using burst mode instead of integral mode. Integral mode reads all the input data before processing, which for large data sets can involve a lot of caching. Experiment with different fetch unit sizes and work memory page sizes to optimize throughput.
- Avoid using PUT and GET statements, as they are less efficient. If you need dynamic source/target settings, use map calls (a RUN statement with overrides) or the resource registry; see the sketch after this list.
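
To illustrate the last point, a rough sketch (the map, file and object names are invented, and the exact adapter command strings depend on your platform and release):

  Dynamic output via PUT - the File adapter is only named in a text string, so it has to be loaded at run time:

    PUT("FILE", "-FILE custaddr.out", MappedAddresses)

  Alternative - keep the adapters on ordinary cards and pass the dynamic settings as overrides on a map call, so the server can preload them:

    RUN("address_xform.mmc", "-IF1 custaddr.dat -OF1 custaddr.out")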

Hope this helps.
 
1. Disable the trace
2. Set the input card type to Burst
3. Change the map setting 'Work space' to Memory
4. Make the Page size to 260 ... 1024
5. Page count from 500 .... 1000
6. Change the File adapter to Sink

Hope this helps; let me know the improvement after changing the settings.

Anantha
 
Code a -WM when invoking your map. Also, the page size and count in the map settings will affect your performance, especially when handling large files. I found this out on 6.5 SP 2.
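
For example (assuming the command server is invoked as 'mercator' on your platform; on OS/390 the same execution commands would presumably go in the batch job's PARM; map and file names are made up):

    mercator address_xform.mmc -IF1 custaddr.dat -OF1 custaddr.out -WM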
 
I have not heard of any conclusive evidence that the use of GET or PUT statements affects efficiency or is detrimental to performance, as Willbert suggests.

Interested to hear further views on this.

Cheers
Jonbert
 
Hello Jonbert, thought I might find you here causing trouble ;-)

The efficiency issue revolves around the loading of adapters into memory. If you specify an adapter in the output card of a map then the event server can preload it. If you are using PUTs and GETs then the adapter type is specified in a string and the adapter must be loaded at runtime.
 
The input is a large dataset. Is it possible to create a smaller, simpler extract?

I work on a system that reads from an Oracle database. Oracle is much faster at performing queries than Mercator. The database developer wrote a database procedure that creates an extract in a temp table, and the map then does a simple query on that one table instead of querying multiple tables. This limits the number of connections to the database and improves performance greatly.
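
A rough sketch of that approach (table and column names invented; the real procedure will obviously depend on your schema):

    -- Extract step run by a database procedure before the map starts:
    CREATE TABLE addr_extract AS
      SELECT c.cust_id, c.cust_name, a.street, a.city, a.state, a.zip
        FROM customers c
        JOIN addresses a ON a.cust_id = c.cust_id;

    -- The Mercator input card then only needs a trivial single-table query:
    SELECT * FROM addr_extract;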
 
We are currently on Mercator version 6.7. I have a huge 832 file, and it took about 2 hours to process it and write it to the database. The input file is just 60 MB, and we are expecting a file of about 140 MB from a trading partner. This is causing significant problems in our production environment; sometimes the event server goes down, jeopardizing production. I called Mercator technical support and they suggested I create an output file with just the detail records. My input 832 file contains about 672,853 LIN records; each LIN loop has 1 LIN, 1 DTM, 2 PID, and 2 CTP segments, and all together there are 2,124,850 segments in the 832 transaction set. If anybody can give me a clue on optimizing the map, it would be greatly appreciated.
 
mymercator --

There is a performance issue with Mercator when writing large amounts of data to databases. It revolves around inserting new rows versus updating existing rows. Ascential's short-term answer is "known issue; buy DataStage".

Another option is to take advantage of Oracle SQL*Loader; an example is provided with Mercator 6.7.1. The problem with this versus loading from a map is that the loader commits after each row, whereas a map commits only when there is nothing left to process. SQL*Loader cut my total processing time to about 12-13 minutes on larger files, i.e. multiple maps managing input files of at least 80 MB with related output sizes of at least 175 MB.
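
A minimal sketch of the SQL*Loader route, assuming the map simply writes a delimited flat file on an output card and sqlldr is run afterwards (table, column and file names are invented):

    LOAD DATA
    INFILE 'lin_details.dat'
    APPEND
    INTO TABLE item_price_detail
    FIELDS TERMINATED BY '|'
    (item_id, description, unit_price, effective_date DATE "YYYYMMDD")

invoked with something like:

    sqlldr userid=app/pwd control=lin_details.ctl log=lin_details.log rows=5000

where rows= controls how many rows go into each commit on a conventional-path load.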

eyetry

PS: I've never been able to resolve a performance issue simply by adjusting the page size/count. It's always more complicated, and I get better performance by focusing on other things. I'm not saying it doesn't help, but from my limited experience the best performance tuning usually comes from elsewhere. Also, setting page size and count seems like an art rather than a science; it takes a bit of messing around to find the ideal settings, and the settings for one map may be different for another even though both work with files of similar sizes in the same environment.
 
I agree with you Eyetry about your PS.

I would just add some tips from my own experience:
There are often architectural solutions using multithreading: a large input file can be cut into multiple small files and processed in parallel (see the sketch after these tips).
For your cutting map, use a very simple type tree, e.g. one group containing a repeating line delimited by a carriage return.
For your processing map, make sure the work area option is set to Memory, or, if that is not possible (too much memory consumption), make sure the work area file option is set to 'unique'. This stops the maps from being forced to run one after the other. Also, don't forget to raise the 'max concurrent instances' setting above 1 in the MIB.
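
If you drive it from the command line rather than the event server, the idea looks roughly like this; the binary name and the -IF1/-OF1/-WM overrides are assumptions carried over from earlier posts, not a tested recipe:

    # Cut the big file into line-based chunks (the tip above suggests a simple
    # Mercator cutting map; outside Mercator a plain split does the same job):
    split -l 100000 big_input.dat chunk_

    # Run the processing map on each chunk in parallel, work area in memory:
    for f in chunk_*; do
        mercator address_xform.mmc -IF1 "$f" -OF1 "${f}.out" -WM &
    done
    wait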


Oaiusr (hope this helps you)


 
4. Make the Page size to 260 ... 1024
5. Page count from 500 .... 1000

Bad advice. Page size should be a power of 2, so 260 is not good. You really need to test page size and count: if you have a page size of 1024 and a count of 1000 (999 is actually the highest valid value), you would be allocating a LOT of RAM before your map ever ran (6.7.0 and above). This was good advice prior to 1.4.2 SP7, where page sizes of 1024 were ignored and a lower value was actually used, but that was fixed years ago.
How to test: take the most representative data set you have and run it through the map several times with various page size and count settings, using a command-line override.

Start with a page count equal to the number of cards, and use sizes of 64, 128, 256 and 512. Take the best size and test a value between it and its nearest rivals (like 384), then retest with page counts 2 above and below the original. Based on those results you may need to fine-tune, or you may see that there is not much variation between settings, in which case pick the smallest that does not degrade performance.
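
A rough harness for that test, assuming a command-line driven map; '-PS' and '-PC' below are placeholders for whatever page size/count override your release supports (not real flags), and the map and file names are invented:

    # Time the same representative file at each candidate page size,
    # keeping the page count fixed at the number of cards (2 here):
    for size in 64 128 256 512; do
        echo "page size $size"
        time mercator address_xform.mmc -IF1 sample.dat -OF1 sample.out -PS $size -PC 2
    done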



BocaBurger
<===========================||////////////////|0
The pen is mightier than the sword, but the sword hurts more!
 