ORA-01555: snapshot too old - Problem with Rollback Segments?

Spikmeister (Programmer)
Jul 30, 2003
I'm using Oracle 8i and doing a data load of almost 800,000 records. I then need to update all of these records after doing some complex searching on other tables.

However, I keep getting this error (normally only a couple of hundred records short of finishing as well!):
ORA-01555: snapshot too old: rollback segment number 4 with name "R03" too small

Our DBA has increased one rollback segment (R07) to be very large, and I've tried using "execute immediate 'set transaction use rollback segment R07';" within the procedure, but it still appears to be using the wrong rollback segment and erroring.
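For reference, here is roughly the shape of what I'm doing inside the procedure (the table, column and function names below are just placeholders, not my real ones). As I understand it, SET TRANSACTION has to be the very first statement of a transaction and has to be re-issued after every COMMIT:

-- Placeholder names only; this is just the shape of the code.
BEGIN
   COMMIT;  -- make sure no transaction is open, so SET TRANSACTION is accepted
   EXECUTE IMMEDIATE 'SET TRANSACTION USE ROLLBACK SEGMENT R07';

   UPDATE loaded_data t                       -- placeholder table
      SET t.result_col = lookup_result(t.id)  -- placeholder function doing the complex searching
    WHERE t.load_batch = 1;                   -- placeholder predicate

   COMMIT;  -- after this, SET TRANSACTION must be issued again for the next transaction
END;
/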

Any ideas on how I can stop it from erroring?
 
Hi TheShowMeCanuck

Don’t shoot the messenger because you don’t like the message.

Try to go through all the material on the AskTom website at Oracle – you won’t find any information about that solution there.

I don’t know your system, so I can’t give you a solution. However, I can say this:

I understand you – what do you do when the manager is yelling and walking around with a shotgun looking for the guilty developer/DBA? You remove the problem FAST – however, you did not solve the problem, and you probably haven’t optimized the database.

In the long run you might get more problems (it’s like turning a pyramid upside down). As you build more and more into the database, the problems get worse. If your RBS grow a lot (or you use very large RBS), then remember that Oracle uses these RBS for read consistency – so Oracle gets slower when the RBS are full of “before images” of rows.

My point is, you must decide whether you want to continue or redesign. As I see it, you have lots of data and rows – fine. If I understand your reply, your script only returns an insignificant number of rows out of the total number of rows.

Why not change the behavior of the script? Why not make it easier for the script to find/build the rows? You can set a flag on interesting rows (maybe using a bitmap index), or you can save primary key info to a special table when rows are changed (see the trigger sketch below). You can use materialized views, or, instead of using one large session, build an automatic job (DBMS_JOB) to periodically pre-update/find/do the whole script and reduce long-running scripts to a minimum (I have done this with great success), or see if you can make the script work in parallel. You can do several things, but try to avoid crossing committed rows.
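Just as an illustration of the “save primary key info to a special table” idea – all table, column and trigger names here are invented:

-- Illustration only; names are invented.
CREATE TABLE changed_keys
(
   pk_value   NUMBER,
   changed_on DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_track_changed_keys
AFTER INSERT OR UPDATE ON source_table    -- the table your script searches
FOR EACH ROW
BEGIN
   INSERT INTO changed_keys (pk_value) VALUES (:NEW.id);
END;
/

Then the big script only has to visit the rows listed in changed_keys instead of scanning everything.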

Maybe your problem is not the script.

The idea of using temporary tables (they don’t have to be real temporary tables – just tables used only by this script) is that you avoid crossing committed rows, so Oracle avoids using the RBS for read consistency. The advantage is that you can do lots of work on the temp tables with commits in between – not one long-running transaction. Breaking the script down with commits also gives you a chance to see status and so on from a “manager” console (another session) while following the script, and you do not generate lots of RBS. When I am doing this kind of coding I also build an error table for bugs. I avoid using FETCH (cursors) and try hard to use pure DML statements. When everything is acceptable, a few massive update/insert/merge statements update the normal tables. Sometimes the customer needs to install more disks in the SAN because we would otherwise run out of space – but the alternative is …
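A rough sketch of what I mean – all names are invented, and the “chapters” here are just split by a MOD on the key:

-- Rough sketch only; all names are invented.
-- Work in a staging table so intermediate commits never touch the real rows.
CREATE TABLE stage_results
(
   id        NUMBER,
   new_value NUMBER
);

BEGIN
   -- Fill the staging table chapter by chapter, committing after each chapter.
   FOR chapter IN 0 .. 9 LOOP
      INSERT INTO stage_results (id, new_value)
         SELECT t.id, complex_lookup(t.id)   -- the expensive searching
           FROM big_load_table t
          WHERE MOD(t.id, 10) = chapter;
      COMMIT;                                -- short transactions, little RBS use
   END LOOP;

   -- One final pure DML statement moves the result into the normal table.
   UPDATE big_load_table t
      SET t.derived_value = (SELECT s.new_value FROM stage_results s WHERE s.id = t.id)
    WHERE EXISTS (SELECT 1 FROM stage_results s WHERE s.id = t.id);
   COMMIT;
END;
/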

I try hard to avoid the problem rather than live with it.


Regards
Allan
 
Allan, I definitely found that solution either on AskTom or on Ixora, I don't remember exactly which. Both sites are known for their expert advice, and as far as I remember the solution was provided "AS IS" without official support, though with a similar explanation.
In some cases we have no control over the application data: no opportunity to issue ANY DDL. This is a precondition for almost any purchased application. Using temporary storage may obviously break consistency. So can you provide a better way under such initial conditions? I doubt it. I also use this trick, and believe me, the reason I do it is not ignorance.
I'm sure you're spoiled by being the god of your own database; try being a servant of one developed by somebody else :)


Regards, Dima
 
Hi Dima

You can’t find that solution as part of Oracle’s posted tips. Maybe on OTN, in forums like this one, but Oracle does not accept the method as a solution.

But I agree that often you can’t change things – not because you don’t want to – but for other reasons, like politics, a third-party vendor, and so on.

If you take over a system from someone else – then I guess you plan changes.



Regards
Allan
Icq: 346225948
 
Allan, especially for you from beloved AskTom:


o frequent commits slow you down
o probably break your data integrity
o is harder to code


...I still cannot find that post; it was more than 3 years ago (I found an old script using it) :-(

Regards, Dima
 
Hi Dima

Well, I probably did not express myself clearly.

I agree with your last reply. But that’s not what I am describing. I break down the commits by using temp table(s). It’s almost like a chapter in a book: after each chapter I commit. I don’t need to wait to commit until the whole book is finished. I can actually create some overview status between chapters if I like; if the status is OK, I continue. If for some reason the session is stopped – then nothing is wrong. I can either continue from that chapter or truncate/drop the working tables and start all over. When I am finished with the book, I generate a few DML statements so the result moves from the temp tables into the normal tables. That’s my idea of temp table(s). Once I used this idea with DBMS_JOB so the script was activated every 10 minutes instead of once a day, and performance was very high. It’s almost like a buffer prepared and ready to fire.
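The DBMS_JOB part looks roughly like this (the procedure name is invented):

-- Illustration only; the procedure name is invented.
DECLARE
   l_job BINARY_INTEGER;
BEGIN
   DBMS_JOB.SUBMIT(
      job       => l_job,
      what      => 'prepare_next_chapter;',
      next_date => SYSDATE,
      interval  => 'SYSDATE + 10/1440');     -- run every 10 minutes
   COMMIT;                                   -- the job is not scheduled until you commit
END;
/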

But let’s make one thing clear – I am not trying to look like a god or the expert of experts – just trying to make my point: if you get snapshot too old problems, then something is wrong. Try to investigate the reason. You may use the good idea described by David – but you still have some kind of problem. If you don’t try to break down problems, someday the whole system may look old, impossible to service, and expensive to change.

..and now I can’t cook more on this thread.



Regards
Allan
Icq: 346225948
 
Hi Dima

I just had to try to prove my claim that Oracle doesn’t consider the above method a legitimate option. And I found one answer from Thomas Kyte on Ask Tom to support my opinion.

ERROR at line 1:
ORA-01555: snapshot too old: rollback segment number 10 with name "RBS_SMALL"
too small
ORA-06512: at line 9


.... Cut ....
.... Cut ....

In almost 15 years, I've never once hit this in a properly sized system. I've
fixed lots of systems that hit this (by sizing RBS correctly for them).
Rollback is not something to skimp on or undersize.


It was posted 2 July 2001 on Ask Tom.

Now I am not an expert like Thomas Kyte – but I have never had snapshot too old as a permanent problem – but then again I have only visited a few sites with this problem.

Regards
Allan
Icq: 346225948
 
Many valid points have been presented here. Perhaps you should "agree to disagree". Every environment is unique and what works for one may not be the best solution for another.

[sup]Beware of false knowledge; it is more dangerous than ignorance.[/sup][sup] ~George Bernard Shaw[/sup]
Consultant/Custom Forms & PL/SQL - Oracle 8.1.7 - Windows 2000
 
Hi Allan
I think you've probably read his recipes: they either guarantee only a reduced likelihood of such errors or are based on exact preliminary knowledge of the requirements. I can only envy you if that is always applicable to your situation. Just accept that others may need either a more reliable or a less expensive solution, and shift your attitude a bit. When you have nothing to suggest (besides, probably, some truisms) – don't prevent others from doing so.

..and now I can’t cook more on this thread.

Regards, Dima
 
Allan (AOLEURO),

I have some issues with your ideas on this thread. Here is some background on my current experience on this topic:

Our shop conducts nightly exports of our schemas. I was experiencing ORA-01555 errors on our nightly backups. I implemented SantaMufasa's prime suggestion, and my problems disappeared. Our databases each have 6 public rollback segments with optimal sizes of 10MB. So, our rollback segment tablespace usually consumed about 60MB of space, but could grow to 4GB total, if necessary, without error.

After you criticized Santa's solution and suggested that "if you get snapshot too old problems, then something is wrong," I set about trying to achieve the ideal environment that you suggest. I even got approval from our IT manager to allocate 2GB each for our 6 rollback segments (increasing our typical consumption from 60MB to 12GB...a 20,000% increase) to provide the environment (very large rollback segments) that your posts imply.
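For the record, the storage change was along these lines, repeated for each segment (the segment name here is generic, not one of our real ones):

-- Generic segment name; repeated for each of our six rollback segments.
ALTER ROLLBACK SEGMENT rbs01
   STORAGE (NEXT 10M OPTIMAL 2048M);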

We are doing our nightly backups when virtually no one else is using the system. We are doing SIMPLE Oracle schema exports (using Oracle's code over which we have no control except to execute the code). So I believe we have done all that we can do in this situation to eliminate anything that you may call "wrong". YET WE ARE AGAIN RECEIVING ORA-01555 ERRORS DURING OUR NIGHTLY ORACLE EXPORTS !!!

So what do you suggest is still "wrong"? What can we do to get rid of ORA-01555 (besides reverting back to SantaMufasa's working solution)? If we cannot fix this problem using your suggestions, I would say a retraction of your earlier criticisms is in order.
 
DeepDiverBecca,
the problem you have has sometimes happened to me as well, but it is really strange that Santa's solution works for this.
If I understood the scenario as Santa explained it, the problem was that a long-running application needed some data which was in someone else's rollback segments.
If you don't have any nighttime transactions, you should not have that scenario.
If the backup is the only transaction, I suggest putting a big RBS ONLINE and putting the others OFFLINE, then launching the export, which is basically one big, big SELECT.
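Something like this, run before the export and reversed afterwards (the segment names are invented):

-- Segment names are invented; adjust to your own.
ALTER ROLLBACK SEGMENT rbs_big ONLINE;
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
ALTER ROLLBACK SEGMENT rbs02 OFFLINE;
-- ... take the remaining small segments offline, run exp, then reverse ...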
 
Hi DeepDiverBecca

Thanks for your reply.

I must start this reply by giving you credit for trying to solve the problem. That was the biggest point of my replies in this thread.

And as I now realize – people in this thread almost hate my replies. I can’t (and it’s not my point to) teach old DBAs new tricks (and I do not like to go after the man instead of the ball). I was afraid that newbies might believe that Dave’s solution would make all problems vanish from the database – and that the solution could simply be built into scripts and code.

I have met people who think that Oracle 9i with undo tablespaces has eliminated ORA-01555, and that Oracle has accepted ORA-01555 as a bug. However, the truth is that with undo tablespaces Oracle has taken complete control of the RBS, and internally (under the covers) an undo tablespace is still just RBS. Oracle controls the number and equal size of the RBS, and the only controls you have are the retention period and the size of the undo tablespace, so Oracle minimizes the problem.

If ORA-01555 were a real bug, Oracle would change it – and Oracle would produce far more documentation so people could avoid it. However, Oracle has functionality that no other RDBMS has, as far as I know – Oracle does multi-version consistent reads so your database can scale more – much more. But you need to know all about your application and database as a system. Why would Oracle not make Dave’s solution part of its suggestions, as it does for lots of other issues? I think it’s because we are looking at the wrong end of the problem. ORA-01555 is a symptom, not the problem.

Try to read the following thread on Ask Tom.
It lists some possibilities and also explains that ORA-01555 is also possible because of DELAYED BLOCK CLEANOUT (often the problem instead of read consistency). And if your OPTIMAL setting is very wrong, then it will have an impact (try removing it).

The number of RBS I would use is (concurrent transactions / xx), where xx is low for long-running transactions and high for short single-DML transactions/statements. 6 RBS sounds small – try increasing to 12 or 16 (I have no idea how many sessions and concurrent transactions you are running). And I would use equal-size RBS and remove OPTIMAL, or at least be careful with it, because shrinking drops your read-consistent information in the RBS.
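You can watch the segments with something like this (standard V$ views; the segment name in the ALTER is just an example):

-- How often each rollback segment has wrapped, extended or shrunk.
SELECT n.name, s.rssize, s.optsize, s.shrinks, s.wraps, s.extends, s.waits
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;

-- Removing OPTIMAL from a segment (segment name is just an example).
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL NULL);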



Regards
Allan
Icq: 346225948
 
AOLEURO, believe me, nobody hates you for your posts.
I found some of them very valuable, but sometimes I like to skip over the theory, and I like solutions like Dave's.
As I said in a previous post, you're correct in most cases concerning the poor design of the majority of DBs ... but sometimes they are very well designed ... and ORA-01555 still happens.
 
Just as an update: rather than trying to get around this problem, I have simply set up an error log, and if my procedure errors due to the rollback segment it picks up from where it left off... So far this has proved effective...
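Roughly, the skeleton looks like this (the procedure, function and log-table names are made up for the example):

-- Skeleton only; procedure, function and log-table names are made up.
DECLARE
   snapshot_too_old EXCEPTION;
   PRAGMA EXCEPTION_INIT(snapshot_too_old, -1555);
BEGIN
   -- resume from the last record that was successfully processed
   process_records(p_start_id => get_last_processed_id());
EXCEPTION
   WHEN snapshot_too_old THEN
      INSERT INTO load_error_log (logged_on, message)
      VALUES (SYSDATE, 'ORA-01555 hit - will restart from last processed id');
      COMMIT;
      -- the job is simply re-run and carries on from where it stopped
END;
/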
 
Hi there guys,
Phew, I've just gone through this whole thread. Does anyone know if this 9i undo tablespace stuff does stop ORA-01555 from occurring???
 
No, it doesn't.

In the Oracle 9i New Features course notes, there is a rather explicit bullet on one of the slides:

- ORA-1555 is still possible if an undo tablespace is too small compared to the retention time.

AUM takes care of a lot of tasks normally associated with rollback segments, including creating new undo segments when needed, stealing unused extents from other undo segments and allocating them to an undo segment that is running out of room, as well as attempting to preserve rollback information for a given amount of time (as mentioned in the bullet). By setting undo_retention, you can force Oracle to try to preserve a given amount of undo information (the default is 900 seconds). Of course, if you set it too high relative to the size of the undo tablespace, the system can still be forced to overwrite older undo information and you can still get the ORA-1555.
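For reference, a typical AUM setup looks something like this (the file path, sizes and retention value are only placeholders):

-- Placeholders only for file path, size and retention.
CREATE UNDO TABLESPACE undotbs1
   DATAFILE '/u02/oradata/mydb/undotbs1_01.dbf' SIZE 2048M;

-- init.ora / spfile:
--   undo_management = AUTO
--   undo_tablespace = UNDOTBS1

-- How long (in seconds) Oracle should try to keep committed undo.
ALTER SYSTEM SET UNDO_RETENTION = 900;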

 