
Using START TRANSACTION, COMMIT and ROLLBACK


n2dp (Programmer), Sep 5, 2003
I am currently having a problem with a process that merges account information from one account to another. If the process gets interrupted, some files have already been updated and some have not, and we then have to do extra work to get everything back to the way it was before the process began. I'm trying to find any information (good or bad) on using the ROLLBACK and COMMIT commands as a solution.

Any ideas or direction would be very helpful.

thanks,

Mike G.
 
There are solutions for this.

If you would tell us about your environment (platform, OS, database), we can help.

Dimandja
 
Dimandja,

Our application is written in AcuCOBOL; we use the Vision file system on HP-UX.

So am I in the right ballpark as far as the COMMIT and ROLLBACK commands are concerned? And what kind of overhead would we have?

thanks in advance

Mike G.
 
Hi Dimandja,

I did mean the Vision file system. I am really not asking a question about syntax, but more about what other developers think of using those commands. We have never used them, and the skeptics came out of the closet at my suggestion of using COMMIT and ROLLBACK. Will it hurt performance? If the COMMIT errors out, will everything still get rolled back to a consistent state?
Those are the kinds of answers I need. If I was not clear enough the first time, I apologize.

Once again thanks for any input.
 
Hi Mike,

I have used transaction processing for about 20 years with complete satisfaction.

You will probably need to clearly identify your transactions by inserting BEGIN (START TRANSACTION, in ACUCOBOL terms), COMMIT and ROLLBACK where appropriate. The activities between BEGIN and COMMIT constitute one logical transaction. When ROLLBACK is used, the whole transaction is backed out as a unit.

With transaction processing, interrupted transactions are automatically (or explicitly) backed out.
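
As a minimal ACUCOBOL-style sketch of that bracketing (the program, paragraph and flag names here are invented for illustration, and the files involved must be set up for transaction logging; check the ACUCOBOL-GT documentation for your version):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. MERGE-SKETCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-MERGE-OK    PIC X VALUE "Y".
       PROCEDURE DIVISION.
       MAIN-LOGIC.
      *    Everything between START TRANSACTION and COMMIT is one
      *    logical unit: it is applied completely or not at all.
           START TRANSACTION
           PERFORM UPDATE-FROM-ACCOUNT
           PERFORM UPDATE-TO-ACCOUNT
           IF WS-MERGE-OK = "Y"
               COMMIT
           ELSE
      *        Back out every update made since START TRANSACTION.
               ROLLBACK
           END-IF
           STOP RUN.
       UPDATE-FROM-ACCOUNT.
      *    REWRITE the "from" account's records here; set WS-MERGE-OK
      *    to "N" on any bad file status.
           CONTINUE.
       UPDATE-TO-ACCOUNT.
      *    REWRITE the "to" account's records here.
           CONTINUE.

If the run is interrupted between the START TRANSACTION and the COMMIT, none of the merge's updates survive, which is exactly the consistency you are after.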

Yes, there is of course an overhead associated with all this. The overhead is in processing time (CPU and disk activity) and possibly additional hardware.

In online (interactive) programs, processing overhead is negligible - you won't notice it.

In batch mode, you will need to create larger transactions to reduce the overhead: for example, COMMIT after processing 200 master records (or transactions). The disk process is fast and will commit those 200 transactions in roughly the time it takes your program to process a couple of them, resulting in a comfortably acceptable overhead.
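
Something like this, as a sketch (the 200 threshold is arbitrary, and PROCESS-ONE-MASTER stands in for whatever reads and updates a master record and sets the end-of-file flag):

      * In WORKING-STORAGE:
       01  WS-IN-BATCH    PIC 9(5) VALUE ZERO.
       01  WS-EOF-FLAG    PIC X    VALUE "N".

      * In the PROCEDURE DIVISION:
       BATCH-RUN.
           START TRANSACTION
           PERFORM UNTIL WS-EOF-FLAG = "Y"
               PERFORM PROCESS-ONE-MASTER
               ADD 1 TO WS-IN-BATCH
      *        One physical COMMIT covers the last 200 logical
      *        transactions; then open a fresh transaction.
               IF WS-IN-BATCH >= 200
                   COMMIT
                   MOVE ZERO TO WS-IN-BATCH
                   START TRANSACTION
               END-IF
           END-PERFORM
      *    Commit whatever is left in the final partial batch.
           COMMIT.

Tune the batch size against your own run times; the trade-off is that an interruption backs out the whole uncommitted batch, which your restart logic must be prepared to reprocess.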

You may also want to keep a running log of your transactions off-line (disk or tape). This allows you to backtrack (ROLLBACK) days, weeks or even years.

The benefits of transaction processing outweigh the new overhead by a very big margin.

Dimandja
 
Dimandja,

Thanks for all of the help and information. This is exactly what I needed to know.
 
Mike,

First, I'm glad I chased you over to this Forum, where you received the type of response I expected.

Dimandja has provided good advice. I am not very familiar with the Vision file system, especially with its implementation of transaction support. Therefore take the following as potential concerns only.

I am a bit concerned that your initial problem statement said, "some files ... have already been updated and some that [are] not." If you try to encapsulate transactions that contain a very large number of records, one thing you might want to watch is the effect on process memory. The very nature of transaction processing is to (1) 'remember' what the file structures contained at the time of the BEGIN, (2) 'remember' what changes have to be made to the file between the BEGIN and the COMMIT, and (3) do the changes in a way that is invisible until all changes have been made to the physical store. All this 'remembering' might use more process memory than you are accustomed to.

The statement might also indicate a potential processing flaw. If your merge process updates several files, and does so one file at a time, then, unless you bracket the entire set of updates for all files in a single transaction, adding transactions will accomplish nothing. If your merge logic updates all files for each related set of changes (e.g. for a single bank payment, all files are updated to reflect that bank payment before another payment is merged), then bracketing each related set of changes in a transaction should work nicely, as in the sketch below.
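
To make that concrete, a rough sketch only (the file-update paragraphs and the status flag are inventions for illustration, not your actual merge logic):

       PROCESS-ONE-PAYMENT.
      *    Every file touched by this one payment succeeds or fails
      *    as a unit; the next payment gets its own transaction.
           START TRANSACTION
           PERFORM UPDATE-ACCOUNT-FILE
           PERFORM UPDATE-PAYMENT-HISTORY
           PERFORM UPDATE-GL-FILE
           IF WS-PAYMENT-OK = "Y"
               COMMIT
           ELSE
               ROLLBACK
           END-IF.

The point is that the transaction brackets the logical unit of work (one payment across all files), not one file at a time.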

I hope I haven't added too much confusion to the issue. Just wanted you to be aware that there might be side effects in your presently stable processing environment.

Tom Morrison
 
