100 MB transaction generates 1.6 GB of redo logs?

Status
Not open for further replies.

rtefft

Programmer
Aug 23, 2001
70
US
I have read several of the excellent articles here by SantaMufasa on Redo Logs. Having said that, I am still left with a problem. Here's the rough outline of what goes on:

We need to temporarily store several megabytes (2-50) of data in a table for the single purpose of passing the data from the database to external software (it cannot handle PL/SQL tables/arrays). We have been trying to minimize the impact of this on the redo logs, without much success. Here's the setup; it is all performed as a single transaction:

External Program (on webserver):
Begin Transaction
Calls PL/SQL Package to load result set into temp table
Select result set and write to file on webserver
End Transaction (ROLLBACK)
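In PL/SQL terms, the flow looks roughly like this (the package, procedure, and table names here are invented purely for illustration):
Code:
-- Hypothetical names, for illustration only
BEGIN
   -- 1) Load the result set into the staging table
   stage_pkg.load_result_set(p_request_id => 123);
END;
/

-- 2) The web server selects the rows and writes them to a file
SELECT col1, col2 FROM stage_temp WHERE request_id = 123;

-- 3) Then the staged rows are discarded
ROLLBACK;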

I now understand why the Redo Logs may see this data (usually only when the Log Buffer can't hold all the changes, I think). However, when we write/rollback 100 MB of data, we end up cycling through 1.6 GB of redo logs (16 x 100 MB)! We performed this in a stand-alone environment to ensure no other transactions were occurring.
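(For anyone wanting to pin down how much redo their own session actually generates, the "redo size" statistic can be sampled before and after the load using the standard v$ views:
Code:
SELECT sn.name, st.value
  FROM v$mystat   st,
       v$statname sn
 WHERE sn.statistic# = st.statistic#
   AND sn.name = 'redo size';

Run it once before and once after the load; the difference is the redo attributable to this session's work.)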

My final questions:

1. Is there any way to prevent/minimize this transaction from hitting the redo logs? We don't need to see this in recovery. (I think the answer here is "no").

2. Why would loading 100 MB of data generate 1.6 GB of redo logs?

Thanks in advance for any help/information,
Rich


____________________________
Rich Tefft
PL/SQL Programmer
 
Rich,

I have a slight impression that there may be some confusion between "redo logs" and "rollback segments" for your transaction. Here is why: you mentioned, "...when we write/rollback 100Mb of data, we end up cycling through 1.6Gb of redo logs." Oracle does not cycle through redo logs during a rollback; Oracle never reads the redo logs unless a database recovery is occurring. If a ROLLBACK occurs, Oracle reads back through the rollback segment for the transaction to fall back to the data images as they appeared before any changes made during the transaction.

There are ways to avoid both redo generation and rollback processing in your environment. Here is an example of avoiding redo logging (note that NOLOGGING must appear before the AS SELECT; placed after it, Oracle silently treats "nologging" as a table alias and logs everything):
Code:
create table temp100 nologging as select * from s_emp;

Table created.

Now to your second issue...avoiding rollback processing upon ROLLBACK. If you are rolling back the entries in the temporary table, why not just DROP the table? If you DROP the table, there is no ROLLBACK processing.

Let us know if any of this seems helpful or if you have followup questions.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
@ 22:35 (04Jan05) UTC (aka "GMT" and "Zulu"),
@ 15:35 (04Jan05) Mountain Time

Click here to Donate to Tsunami Relief. 100% of your contributions here go to the victims...0% to administration.
They were "The First-Responder" to the disaster, with relief deliveries arriving before Red Cross and U.S. aid.
 
Santa,
Thanks for the response. We can't use the CREATE TABLE method as the table is permanent, and is used for our data-transport by multiple concurrent users.

As I understand it, all transactions are written to the redo logs - even before the final commit/rollback is decided. There is a marker written to the log to indicate whether the transaction was committed or rolled back. We cannot avoid hitting the redo logs, but by using a ROLLBACK instead of a DELETE we can at least avoid a double-hit when cleaning up the temp data.

Our rollback segments can handle the loads, since they spike up and down according to usage. We're just drowning in archived redo logs (and we need archiving turned on).

As for the 1.6 GB, I suspect there were other things going on in the database that our DBA didn't know of, like tests of data-loading scripts.

I will keep investigating and let you know if I find anything further.

Thanks again.

____________________________
Rich Tefft
PL/SQL Programmer
 
Rich,

Actually, ROLLBACK is extremely costly, virtually as costly as a DELETE, and here is why:
Code:
DELETE:   1) Writes row data to rollback segment.
          2) Writes a "Delete-byte" to table segment.

ROLLBACK: 1) Reads pre-change row data from rollback segment.
          2) Writes pre-change row data to table segment.

If, in fact, you are emptying out the temp-data table as your cleanup, then you can have the best of all worlds (i.e., retain the permanent "temp data" table, but avoid the costs of DELETE or ROLLBACK) if you:
Code:
TRUNCATE TABLE <temp-data table name>;

A TRUNCATE uses no rollback, generates almost no redo, and does not do any "row-by-row" processing. The other advantage is that the TRUNCATE releases all data blocks for re-use, thus eliminating fragmentation for that table.
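As a side note, if the table refills to roughly the same size each cycle, the extents can be kept rather than released (the table name here is just a placeholder):
Code:
-- Default behavior: deallocates the table's extents
TRUNCATE TABLE stage_temp DROP STORAGE;

-- Alternative: keeps the extents allocated for re-use
TRUNCATE TABLE stage_temp REUSE STORAGE;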

Let us know your thoughts.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
@ 18:14 (05Jan05) UTC (aka "GMT" and "Zulu"),
@ 11:14 (05Jan05) Mountain Time

 
Santa,

Thanks for the clarification on the cost of a rollback. (Yikes!) As for the TRUNCATE, we cannot use it, as the table may hold data from other users at the time (see the first part of my 2nd message). Perhaps we can leave the temp data in there and use an after-hours TRUNCATE (though that table would get huge).

Thanks, and stay tuned...

____________________________
Rich Tefft
PL/SQL Programmer
 
Yes, Rich, if there is a time slice during which users do not depend on the "temp data" table and during which you can truncate it, you will come out far ahead in CPU cycles and performance. The cost is a larger-than-otherwise "temp data" table, but if you can bear that cost, it should still be cheaper than the poor performance of ROLLBACK or DELETE.

Cheers,

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
@ 21:45 (05Jan05) UTC (aka "GMT" and "Zulu"),
@ 14:45 (05Jan05) Mountain Time

 
I think we found just what we were looking for:
Code:
CREATE GLOBAL TEMPORARY TABLE TEST_TABLE
  ...
ON COMMIT DELETE ROWS;

This creates a table (its data goes in your temporary tablespace) that keeps its data only for the duration of the current transaction. The table data bypasses the normal redo path (the undo for it still produces a little redo, but far less than a permanent table). It can have indexes and triggers, but no referential integrity constraints.
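For completeness, a fuller version of that DDL might look like this (the column names are invented; substitute your own result-set columns):
Code:
CREATE GLOBAL TEMPORARY TABLE test_table (
   request_id NUMBER,
   payload    VARCHAR2(4000)
) ON COMMIT DELETE ROWS;

Each session sees only its own rows, and they disappear automatically at COMMIT (or ROLLBACK), so no cleanup DELETE or TRUNCATE is needed.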

Rich

____________________________
Rich Tefft
PL/SQL Programmer
 
Rich,

If that takes care of business for you, then congratulations and well done.

Cheers,

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
@ 16:41 (07Jan05) UTC (aka "GMT" and "Zulu"),
@ 09:41 (07Jan05) Mountain Time

 