
Multiple file processing

Status
Not open for further replies.

jinkys
Programmer · Sep 11, 2003 · GB
What would be the best way to read all the files in a directory at once and process a summary into one single file? I am currently using a source event, but this runs too slowly processing each file on its own, which creates a large backlog since each map run takes approx. 0.5 sec.
Anyone have any suggestions?
 
What environment are you running?

It's a combination of IFD settings and map configuration.

If you are going to multithread, you'll need to make sure that all output files have unique names. Otherwise, you may run into problems where instance A hasn't relinquished control of its output when instance B needs it. This includes non-unique log and trace file names. Also, in the IFD the workfile prefixes must be 'unique', 'skip if busy' must be 'no', and backups must be off. There may be something I'm missing, but I'm at home and working from memory. Let me know if this doesn't work.
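The unique-naming requirement can be illustrated outside Mercator (this is not map syntax; the naming scheme, `unique_name` helper, and prefixes are invented for the example). Each instance derives its output, log, and trace names from values no other concurrent instance can share:

```python
import itertools
import os
import time

_seq = itertools.count()

def unique_name(prefix, suffix):
    """Build a per-instance, per-call file name from the process ID, a
    timestamp, and a counter, so concurrent runs never contend for the
    same output, log, or trace file. (Illustrative only; in Mercator
    this is controlled through the IFD settings described above.)"""
    return f"{prefix}_{os.getpid()}_{time.time_ns()}_{next(_seq)}{suffix}"

print(unique_name("summary", ".out"))
print(unique_name("maprun", ".log"))
```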

FYI: I ran some batch and real-time tests with multithreaded input, using small files, larger files, and a combination. The results were that multithreading took longer than sequential processing.

One of the things that may have negatively affected my testing was that the service is running on a single-CPU machine.

I'll be interested in hearing how it goes.

eyetry
 
Thanks for your suggestions, eyetry.
I'm running on a UNIX environment.
I'm not sure multithreading is going to work, as I am writing to a single summary output file.
 

You'll probably run into the situation where one map keeps the file open while another tries to write to it, and the second will go into pending if the file is locked. What you could do is create a run map that writes to the file and then call it from the first map. That way the "write" maps can queue up while the first maps multithread.
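The pattern above — many maps processing in parallel but only one path ever writing the summary — can be sketched outside Mercator. In this Python analogy (file names, `process_file`, and the summary text are hypothetical stand-ins for map runs), worker threads play the "first" maps and a single writer thread plays the dedicated "write" map:

```python
import os
import queue
import tempfile
import threading

def process_file(name):
    # Stand-in for one "first" map run: summarise a single input file.
    return f"summary of {name}\n"

def writer(q, out_path):
    # The only code that touches the summary file, mirroring a dedicated
    # "write" map called from the first map; writes are serialized here.
    with open(out_path, "w") as out:
        while True:
            line = q.get()
            if line is None:      # sentinel: no more work coming
                break
            out.write(line)

q = queue.Queue()
out_path = os.path.join(tempfile.mkdtemp(), "summary.out")  # hypothetical path
w = threading.Thread(target=writer, args=(q, out_path))
w.start()

# Many "first" maps run in parallel; their results queue up for the writer,
# so none of them ever contends for the summary file itself.
files = ["a.dat", "b.dat", "c.dat"]   # hypothetical input files
workers = [threading.Thread(target=lambda n=n: q.put(process_file(n)))
           for n in files]
for t in workers:
    t.start()
for t in workers:
    t.join()
q.put(None)                           # tell the writer to finish
w.join()
```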


 
If you're running 6.7, you can use the Management Console to see what Mercator is doing and detect any resource conflicts.
 
You'll need to issue a UNIX ls command via a script (which you can run from the input card). This will give you a list of the files in the directory, which you can then access using the GET("FILE", ...) command.

I have also seen this done using a Mercator FTP command with the option to list the directory contents. It will also return a list to the map.

In both cases you would need a fairly simple tree to recognise the different rows in the list.
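The list-then-fetch pattern — obtain the directory contents once, then read every file in a single run and emit one summary — looks roughly like this outside Mercator (the directory, file contents, and per-file summary line are hypothetical; in the map the list would come from the ls script or FTP listing, and each file from GET("FILE", ...)):

```python
import os
import tempfile

def summarise_directory(in_dir):
    """Read every file in in_dir in one pass and return one combined
    summary, instead of triggering a separate run per file."""
    lines = []
    for name in sorted(os.listdir(in_dir)):   # the 'ls' step
        path = os.path.join(in_dir, name)
        with open(path) as f:                 # the GET("FILE", ...) step
            data = f.read()
        lines.append(f"{name}: {len(data)} bytes")
    return "\n".join(lines)

# Demo with a throwaway directory and made-up contents
in_dir = tempfile.mkdtemp()
for name, body in [("a.dat", "xx"), ("b.dat", "yyy")]:
    with open(os.path.join(in_dir, name), "w") as f:
        f.write(body)
print(summarise_directory(in_dir))
# → a.dat: 2 bytes
#   b.dat: 3 bytes
```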

Tim
 
