We are using DB2 on the mainframe as a data warehouse. A loader program written in COBOL currently uses a single flat file as input. After reading each input record, it UPDATEs and INSERTs into around 10 DB2 tables (most of them in partitioned table spaces). This process is consuming a lot of time.
Can you please suggest strategies that could improve performance? Feel free to look at it from every angle, right from locking down to splitting the job. Any input is welcome, and pointers to relevant links or documents would also help. Thanks in advance.
Here are some of the basic ideas I can think of:
1. Split the input file and run the loader program simultaneously through different jobs (see the first sketch after this list).
2. Split the whole process into two steps: first read the input file and prepare a separate output file for each DB2 table to be loaded, then load each output file into its table with the DB2 LOAD utility (DSNUTIL). A sample LOAD step follows below.
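
For idea 1, here is a minimal sketch of splitting the input file with DFSORT's OUTFIL SPLIT, which distributes records round-robin across the output datasets. All dataset names are placeholders:

//SPLIT    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.INPUT.FILE,DISP=SHR
//OUT1     DD DSN=MY.INPUT.SPLIT1,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//OUT2     DD DSN=MY.INPUT.SPLIT2,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//OUT3     DD DSN=MY.INPUT.SPLIT3,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD *
* COPY THE INPUT AND ROTATE RECORDS ACROSS OUT1/OUT2/OUT3
  OPTION COPY
  OUTFIL FNAMES=(OUT1,OUT2,OUT3),SPLIT
/*

One locking note: if the parallel jobs all touch the same partitions, you can trade elapsed time for lock contention and deadlocks. If the tables are partitioned on a key present in the input record, splitting by key range (a separate OUTFIL with INCLUDE= per output) instead of round-robin keeps each job inside its own set of partitions.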
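For idea 2, here is a minimal sketch of one LOAD step, assuming a made-up table DWH.CUSTOMER fed by a fixed-layout output file from the first program. The subsystem ID DB2P, utility ID LOADJOB, and all dataset/column names are assumptions; depending on the indexes on the table you may also need SYSUT1, SORTOUT, SYSERR and SYSMAP work datasets:

//LOAD     EXEC PGM=DSNUTILB,PARM='DB2P,LOADJOB'
//STEPLIB  DD DSN=DB2.SDSNLOAD,DISP=SHR
//SYSREC   DD DSN=MY.OUTPUT.CUSTOMER,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LOAD DATA INDDN SYSREC
    LOG NO RESUME YES
    INTO TABLE DWH.CUSTOMER
    (CUST_ID   POSITION(1:10)  CHAR(10),
     CUST_NAME POSITION(11:40) CHAR(30))
/*

LOG NO avoids logging overhead but leaves the table space in COPY-pending, so plan an image copy afterwards (or specify NOCOPYPEND). Also note that LOAD only covers the INSERT half of the processing; records that need UPDATE logic would still have to go through the program, or be separated out in the first step.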