Bocaburger is spot on: under the 7.5.1 release DataStage parallel jobs can use the DataStage TX Map stage. It has options such as Target Map Directory, Create map from input and output links, select map and map files etc. A DataStage job can then receive or load flattened data to and from this...
It helps if you can schedule the jobs of both tools through a single scheduling tool; this lets you build some interdependence between the two loads. For example, ETL tool 1 loads inventory tables and ETL tool 2 has to wait for those to finish before running the purchasing loads. Automatic...
I've had a problem with the Server Edition aggregation stage in a much older release. In one project I abandoned it entirely, pushed my data into UniVerse tables and used GROUP BY select statements against those tables instead. It may be that on unsorted data the aggregation stage has to wait for...
If you are talking about job documentation, the best output is an HTML report with a context-sensitive bitmap of the job and HTML job properties. This can be generated with a batch script on your client PC that repeatedly calls up DataStage Designer; a sample script has been uploaded to...
This is a large topic and you are well advised to search through the forum archives for threads on the options discussed below. DataStage Hawk combines the DataStage and MetaStage repositories and products into one, so most of these options may be redundant in that release.
Process metadata...
If your statement is that big you may be better off moving it into a routine, where it can be properly tested for all combinations. A routine also lets you switch to a Case statement, which may be easier to manage.
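For illustration, a routine body along these lines is what I mean; the argument name and code values here are made up, not from any real project:

    * Sketch of a routine body using Begin Case in place of a long nested If
    Begin Case
       Case Arg1 = "INV"
          Ans = "Inventory"
       Case Arg1 = "PUR"
          Ans = "Purchasing"
       Case @True
          * Default branch catches anything unmatched
          Ans = "Unknown"
    End Case

Each Case line holds a full condition, so testing every combination becomes a matter of calling the routine with each input value.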
regards
Vincent
An Expert's Guide to WebSphere Information Integration...
A transformer can turn a single input row into multiple output rows by putting end-of-line characters into a field, e.g. Char(10). That way a routine can return to the transformer a text value that contains an array of values with Char(10) between the records. The transformer would output it...
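As a minimal sketch of the routine side (the value names are illustrative), the routine body just concatenates with Char(10) as the delimiter:

    * Return three values as one block of text delimited by Char(10);
    * the downstream transformer splits them back into separate rows
    Ans = Value1 : Char(10) : Value2 : Char(10) : Value3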
DataStage does support dynamic link libraries for transformation routines, though they are rarely used. The suite is moving towards SOA, as is Microsoft, so you will see increasing interaction between real-time DataStage services and the Office suite.
DataStage can read from or write to Excel...
Does your DataStage login user have permissions to the QualityStage project folders? The QualityStage plugin is the best way to run a QualityStage job as it lets DataStage input and output data to QualityStage jobs. You can try to run QualityStage jobs as shell executes from a DataStage...
Any third-party scheduling tool, such as JobMaster, calls a DataStage job via a script that runs the dsjob program. Have a look at the "Command Line Interface" section of the Server Job Developer's Guide; it documents running DataStage jobs from the command line or from a...
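The wrapper script usually boils down to a single dsjob call, something like this sketch, where the project, job and parameter names are placeholders:

    # Run the job; with -jobstatus the exit code of dsjob reflects the
    # finishing status of the job, which the scheduler can then test
    dsjob -run -jobstatus -param LoadDate=20060130 MyProject MyJob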
You can use BASIC routines in a BASIC transformer in a parallel job. If you don't see the BASIC transformer in your list of stage shortcuts, look for it in the repository window by displaying stage types and browsing through the parallel stage types. This lets you move the functionality of...
You can do this in an operating system script, such as a Unix shell script: DataStage would pass the script the date to be checked, and the script would return a success or failure status that tells DataStage whether to proceed.
If you wanted to code it inside DataStage you could pass the date...
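As a rough sketch of the in-DataStage approach, assuming the date arrives as YYYY-MM-DD (the argument name and return values are made up), a routine can use Iconv and Status() to validate it:

    * Convert the string to an internal date; Status() = 0 means a valid date
    InternalDate = Iconv(DateArg, "D-YMD[4,2,2]")
    If Status() = 0 Then
       Ans = 0   ;* valid date, proceed with the load
    End Else
       Ans = 1   ;* invalid date, fail the check
    End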
With Oracle I find it easy to run the DSExecute command from a DataStage routine to execute a SQL*Plus command. You build the command by calling sqlplus followed by the SQL to be executed. The routine can then be called up from Sequence jobs, and you can put the command status and command output into the...
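As a hedged example, where the connection details and script name are placeholders, the routine code comes down to:

    * Build the sqlplus command and run it through the Unix shell
    Command = "sqlplus -s scott/tiger@orcl @truncate_stage.sql"
    Call DSExecute("UNIX", Command, Output, SystemReturnCode)
    * Output holds the command output and SystemReturnCode the exit status,
    * both of which can be logged or passed back to the calling Sequence job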
The hash file stage only accepts exact matches on the key fields, making a < or BETWEEN clause impossible. You can instead read the hash file using a UniVerse database stage, since a hash file is a UniVerse table, and the UniVerse stage does accept < clauses. Search the forum at...
This usually means your input stage is trying to read a row but has used up all the characters in that row before all the columns have been filled: your fixed-width lengths are out of whack.
As for the blank row problem, the easiest way to handle it is to get rid of the blank rows! Can...
We put all jobs belonging to the same workflow into a single Category folder within the DataStage repository. We then use the ETLSTATS package to generate operational metadata on that folder after the workflow has been run. This is a set of passive metadata collection jobs that retrieve stats...
Use a standard Microsoft Excel ODBC driver. When importing the metadata into DataStage make sure the "System Tables" check box is checked as the driver reports each worksheet as a system table.
Another option is to output the Excel file into a delimited format file.
Good question. There is no official tool that will do it, and the two products have completely different repositories. I think there is an unofficial tool that Ascential developed when it migrated a large number of PeopleSoft EPM clients across from Informatica to DataStage Server Edition. They...