
Mercator system design tips required

mlapse (IS-IT--Management)
Jun 30, 2005
Hi,

While designing a new process, which of the following options would be the correct way to implement it:

1) Use a separate system for each trading partner who needs to receive the data. This means there would be many Mercator system files, one per partner, which would result in a large number of watches in the system.

2) Use a router map to call run maps for each partner's customization. This would reduce the number of watches, but the error-message handling, config settings, etc. for each partner would make the implementation more complex.

3) Use a combination of the two above. Have a separate router map for each type of message and then call the individual run maps for the different partners. Again the watches would not be too many, but error handling, etc. would be difficult.

Any other ideas would be appreciated.
 
There are too many variables and unknowns to answer this on a forum, but option 3 is common.
A standard method of error logging and reporting is the foundation for all your future development and will save a lot of support and development time down the road.
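Not from the post above, but purely as an illustration of what such a standard might look like: a minimal Python sketch of one shared log-record layout that every run map could write, so a single generic reporting step can consume results from any map. The field names and the pipe-delimited format are assumptions, not anyone's actual setup.

# Hypothetical example: one shared log-record layout that every run map writes,
# so a single generic reporting step can parse results from any partner or map.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MapLogRecord:
    map_name: str        # e.g. "run-map-1" (illustrative name)
    event: str           # controlling event map, e.g. "837-cntl"
    return_code: int     # 0 = success, anything else gets escalated
    message: str         # short human-readable result or error text
    timestamp: str = ""

    def to_line(self) -> str:
        """Render the record as one pipe-delimited log line."""
        ts = self.timestamp or datetime.now(timezone.utc).isoformat()
        return f"{ts}|{self.event}|{self.map_name}|{self.return_code}|{self.message}"

# Usage: every map appends one line like this to a common log location.
print(MapLogRecord("run-map-1", "837-cntl", 0, "completed, 125 claims written").to_line())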
 
Generally speaking, we use a combination. This process is based on data being delivered to the DSTX server.

-inbound
--x12 (system)
---837-cntl (event map)
-----run-map-1
-----run-map-2
-----....
---834-cntl (event map)
-----run-map-1
-----run-map-2
-----....
---820-cntl (event map)
-----run-map-1
-----run-map-2
-----....
---27X-cntl (event map)
-----run-map-1
-----run-map-2
-----....
---....
--other (system)
---other-cntl-1 (event map)
-----run-map-1
-----run-map-2
-----....
---other-cntl-2 (event map)
-----run-map-1
-----run-map-2
-----....
---other-cntl-3 (event map)
-----run-map-1
-----run-map-2
-----....
---....

-outbound
--x12 (system)
---837-cntl (event map)
-----...
---834-cntl (event map)
-----...
---820-cntl (event map)
-----...
---27X-cntl (event map)
---....
--other (system)
---other-cntl-1 (event map)
-----...
---other-cntl-2 (event map)
-----...
---other-cntl-3 (event map)
-----...
---....

-realtime-query-resp
--27X (system)
---Query-cntl (event map)
-----... (runmap)


--log-file-map (generic for all maps)
--email-map (generic for all maps)

We use similar processes for inbound and outbound EDI files. We also try to reuse as much code as possible. To that end, we standardize map sets based on data type, not on end user (trading partner). Map sets reference configuration files/tables to build and deliver output.
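To illustrate the configuration-table idea (this is my sketch in Python, not Eyetry's actual DSTX setup): one map set per data type, with per-partner behaviour driven by a lookup table instead of per-partner map code. The partner names, paths, and settings below are made up.

# Hypothetical sketch: one map set per data type (837, 834, ...), with per-partner
# delivery driven by a configuration table rather than per-partner map code.
PARTNER_CONFIG = {
    # (data_type, partner_id): delivery settings -- values are illustrative only
    ("837", "PARTNER_A"): {"output_dir": "/edi/out/partner_a", "wrap_isa": True},
    ("837", "PARTNER_B"): {"output_dir": "/edi/out/partner_b", "wrap_isa": False},
    ("834", "PARTNER_A"): {"output_dir": "/edi/out/partner_a", "wrap_isa": True},
}

def deliver(data_type: str, partner_id: str, payload: bytes) -> str:
    """Look up the partner's settings for this data type and 'deliver' the output.

    In DSTX this lookup would live in the map set; here it is a plain dict.
    """
    cfg = PARTNER_CONFIG[(data_type, partner_id)]
    target = f"{cfg['output_dir']}/{data_type}.dat"
    # A real map would build the partner-specific envelope here when wrap_isa is set.
    return f"would write {len(payload)} bytes to {target} (wrap_isa={cfg['wrap_isa']})"

print(deliver("837", "PARTNER_A", b"ISA*00*..."))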

There are tons of ways to do this. It really depends on what will work best for you.

Keep in mind that in more recent versions of DSTX, 7.5.* and later, the more system files you deploy, the more resources are consumed.

Not sure that helps you.
 
Eyetry,

This is really very helpful. Thank you very much! This architecture seems ideal.

The only thing I need to clarify from the above: your X12 appears to be a single system, and there are many event maps in it, one for each process/message ID.

If I understand correctly, you are adding many process flows, i.e. many event maps, to the same system, right? So the number of watches is the same as when you use one system per event map?

Also, how do you handle validation and error reporting in this architecture? Do you use Commerce Manager? Or do you just validate and then pass the message on to the application maps (which do the transformation)?

Again thanks a lot for this post.



 
Eyetry,

Also, do you use a splitter prior to your X12 system in order to determine which message it is, so that you can feed it into the event maps and the subsequent run maps?


Thanks
 
Ummmmm...... Well......

Actually, we have had several versions of the process defined above. I'm working with our development team on a change: in the case below, the X12 system is the master 'msd' file deployed to production; going forward, developers would create a unique msd per common event-map type. So, what currently looks like this...

-inbound
--x12 (system)
---837-cntl (event map)
-----run-map-1
-----run-map-2
-----....
--other-claim-events (system)
---other-cntl-1 (event map)
-----run-map-1
-----run-map-2
-----....

Would become something like....

EDI Map definition file
--claims system
---837-cntl (event)
-----runmaps
---other-claim-related (event(s))
-----runmaps
--Enrollment system
---834-cntl (event)
-----runmaps
---other-related-maps (event(s))
-----runmaps


The new way should allow our system admin to grab existing systems into the master msd file and perform one main deployment. It's not the best way of deploying apps, but it is a step in the right direction and will address some concerns related to security, consistency, and developer access rights.

Yes, 'many event maps to the same system' is right. You do end up with the same number of watches. Also, in the newer versions of DSTX each system file deployed consumes resources, maybe as much as 10-15 MB or more. So after you deploy 30 watches as separate system files you'll have used as much as 300+ MB of memory. Deploying those 30 in one system file will consume a relatively small percentage of that 300+ MB. In addition, in the newer versions like '8' you can configure the server so that if one system GPFs it won't bring everything else down with it. There's a downside to consolidating too many events, though.

As far as validation... Once a new dataset has been sent to production we don't bother running a validation map on it, and we don't allow new data/events to go to production until we have some confidence in their quality. At that point the non-X12 data will either run or fail. If it fails due to a validation error, the data is run through a validation map on our test server, or a validation map is created in response to the error. EDI X12 data goes through Commerce Manager, which creates rejects, logs, and acknowledgements as appropriate. I have a non-Cmgr system that looks for Cmgr events and notifies the appropriate people based on the results. Data issues usually go to BAs to resolve; log and other issues go to development resources.

Other maps are configured to create log files in a specific directory. An event watches that path, examines the log files, and records the results. If it notes an issue, a message is sent to the appropriate staff for further investigation.
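A minimal sketch of that watcher idea in Python, assuming the pipe-delimited log layout from the earlier sketch; the directory path and the notify step are placeholders, not the actual DSTX configuration.

# Hypothetical sketch of the log-watching event: scan a log directory,
# record each result, and flag anything with a non-zero return code.
from pathlib import Path

LOG_DIR = Path("/dstx/logs/runmaps")   # illustrative path, not from the post

def scan_logs() -> list[str]:
    """Return a list of alert messages for log lines with a non-zero return code."""
    alerts = []
    for log_file in sorted(LOG_DIR.glob("*.log")):
        for line in log_file.read_text().splitlines():
            # Expected layout: timestamp|event|map|return_code|message
            parts = line.split("|")
            if len(parts) < 5:
                continue  # skip lines that do not match the shared layout
            ts, event, map_name, rc, message = parts[:5]
            if rc != "0":
                alerts.append(f"{event}/{map_name} failed (rc={rc}): {message}")
    return alerts

for alert in scan_logs():
    print("NOTIFY:", alert)   # a real setup would email or page the right staff here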

Our future world will log all results to a DB so users can see the status of their data, operators can respond to issues faster, and we can minimize our current dependency on email. The status will be reflected in a web page....
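Purely as a sketch of where that could head (not the poster's implementation): a few lines of Python writing map results to a SQLite table that a status page could query. The table and column names are invented.

# Hypothetical sketch of logging map results to a database instead of email,
# so a status page can query the same table. Column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")   # a real setup would use a shared server DB
conn.execute(
    """CREATE TABLE IF NOT EXISTS map_results (
           run_at      TEXT,
           event_map   TEXT,
           run_map     TEXT,
           return_code INTEGER,
           message     TEXT
       )"""
)

def record_result(event_map: str, run_map: str, rc: int, message: str) -> None:
    """Insert one map result; the web status page would read from the same table."""
    conn.execute(
        "INSERT INTO map_results VALUES (datetime('now'), ?, ?, ?, ?)",
        (event_map, run_map, rc, message),
    )
    conn.commit()

record_result("837-cntl", "run-map-1", 0, "completed")
print(conn.execute("SELECT * FROM map_results").fetchall())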

Hope that helps!

Keep in mind that this is just what we are doing. It probably isn't the best setup, and it seems like I'm constantly trying to improve things. When we set this up we had no idea what we were doing.

Sorry, all, for the long-winded diatribe.
 
Eyetry,

Thanks again. I was also in the process of redesigning a system, and I like the path you've chosen.

Our method, prior to this redesign, has been quite haphazard. We have used one system for standard X12 data, which consists of a router map. This map takes in a file containing an interchange and, for each ST/SE, calls the run maps. It also checks the return code and, if it's not 0, the map goes into the audit logs of the run maps, extracts info, and stores it to a DB. The same map also generates acknowledgements.
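For illustration, here is a rough Python sketch of the job that router map does: split the interchange at ST/SE boundaries, dispatch each transaction set, and check the return code. The parsing is deliberately naive and run_map is a stand-in for the real run-map call, not our actual code.

# Hypothetical sketch of the router's job: split an X12 interchange into its
# ST/SE transaction sets and hand each one to the right run map.
def split_transaction_sets(interchange: str, seg_term: str = "~") -> list[list[str]]:
    """Return one list of segments per ST..SE transaction set (naive parser)."""
    segments = [s.strip() for s in interchange.split(seg_term) if s.strip()]
    sets, current = [], None
    for seg in segments:
        if seg.startswith("ST*"):
            current = [seg]
        elif seg.startswith("SE*") and current is not None:
            current.append(seg)
            sets.append(current)
            current = None
        elif current is not None:
            current.append(seg)
    return sets

def run_map(tx_type: str, segments: list[str]) -> int:
    """Placeholder for launching the type/partner-specific run map."""
    print(f"running {tx_type} map on {len(segments)} segments")
    return 0

def route(interchange: str) -> None:
    for tx in split_transaction_sets(interchange):
        tx_type = tx[0].split("*")[1]          # e.g. "837", "834"
        rc = run_map(tx_type, tx)              # stand-in for the real run-map call
        if rc != 0:
            print(f"run map for {tx_type} failed (rc={rc}); would log to DB here")

route("ISA*00*...~GS*HC*...~ST*837*0001~BHT*0019~SE*3*0001~GE*1*1~IEA*1*000000001~")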

That seems like a lot of work for just one map. Presently we have a low volume of data, but when the ramp-up happens, I don't think this would be a great architecture to meet the requirements. What do you think?

For flat files, we have a new system per partner. The same is done with custom X12 (X12 data that needs a separate system, something the developer decides). Each flat-file map does all the things the main X12 router does.

The entire process is repeated for all EDIFACT data.

It seems like a system that will need frequent hardware upgrades if the architecture is kept and the volume increases.

I wanted to split the acknowledgement process and the error-handling process off from the main maps and also add Commerce Manager to handle validations.

Does this seem better than our current design?

Thanks again for your help.
 