
ahhhh.ORA-08103: object no longer exists


stevenriz (IS-IT--Management), May 21, 2001
Hi everyone. Just this morning we started experiencing the error ORA-08103: object no longer exists. The system has been pretty much static for a long time, so I don't know what it could be. The table in question is in fact there, and we can see it through Toad or other viewers just fine. We are trying to search for a corrupted row. Anything you can think of to diagnose/fix this short of a full restore would be appreciated. Thanks all!!
Steve

sample:
SQL> select count(*) from tablename1;
select count(*) from tablename1
*
ERROR at line 1:
ORA-08103: object no longer exists


SQL> exit
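
A first sanity check (a hedged sketch; "tablename1" is the placeholder name from the sample above) is to confirm that the dictionary still knows the object, since ORA-08103 commonly means the segment on disk no longer matches what the dictionary expects:
Code:
-- confirm the object is still in the dictionary and note its ids
select object_id, data_object_id, status, last_ddl_time
  from dba_objects
 where object_name = upper('tablename1');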
 
Hi,
Very strange..Here is the 'official' text for that error:

ORA-08103 object no longer exists

Cause: The object has been deleted by another user since the operation began.

Action: Remove references to the object.



It may be that a corrupted rowid is causing this to happen..


To Paraphrase:"The Help you get is proportional to the Help you give.."
 
Yes, that is what we think too. We think we narrowed the affected row down to a certain rowid range, but once we go past the affected row we get this error and it won't display the row data. Weird... There isn't any limit to the number of rows in a table, is there? Currently there are just over 1,000,000 rows...
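
For anyone trying the same narrowing approach, a rough sketch: probe rowid ranges a block at a time with dbms_rowid until the failing block is bracketed. The object number, relative file number, and block numbers below are placeholders you would take from dba_extents.
Code:
-- count rows in a given block range; when this raises ORA-08103,
-- the bad block lies inside the probed range
select count(*)
  from tablename1
 where rowid between dbms_rowid.rowid_create(1, 12345, 7, 100, 0)
                 and dbms_rowid.rowid_create(1, 12345, 7, 199, 999);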
 
Hi,
That may be due to the fact that the header of each row has the reference needed to find the next row ( If I recall my architecture classes correctly - no guarantee)
..If that header cannot be read, the data may be lost..

Try a Metalink search with that error message, or open a TAR with Oracle support - they have some recovery tools/techniques that may help ( may be costly..Is there a valid backup for that table?)

1 million rows is not big, in Oracle Land..







To Paraphrase:"The Help you get is proportional to the Help you give.."
 
thanks Turkbear, yes we do have a recent backup, but I just hate to restore if there is anything else we can do... There isn't much more help in Metalink than what we've talked about here, really. Still plugging tho...
 
Steven,

With the symptoms you are describing, could you check something for me? Can you please confirm the current size of the data files in your database? If the file that holds blocks from the "problem table" has exceeded 2GB, then there is a distinct possibility that your operating system is one for which 2GB+ data files have issues. (The symptom you describe is precisely what happens under that scenario.)

But before we start coming up with a "cure for which there is no disease", let's have you check data file sizes first.
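
A quick way to check is from the dictionary (a sketch; sizes are in bytes, and anything at or above 2GB, i.e., 2147483648 bytes, is of interest):
Code:
-- list datafile sizes from the dictionary, largest first
select file_name, bytes, round(bytes/1024/1024) mb
  from dba_data_files
 order by bytes desc;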

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[ Providing low-cost remote Database Admin services]
Click here to join Utah Oracle Users Group on Tek-Tips if you use Oracle in Utah USA.
 
I do have a couple of files over 2GB:
-rw-r----- 1 oracle dba 3145736192 Apr 17 17:48 dataprod7.dbf
-rw-r----- 1 oracle dba 3145736192 Apr 17 17:48 indxprod7.dbf
-rw-r--r-- 1 oracle dba 2360877056 Apr 17 17:47 system01.dbf

They have been in place for a long time. Might this issue still crop up after a while?
 
Since I don't think I mentioned it: we are running Solaris 2.6 on this box... The more I look around, the more I see files over 2GB. Thanks!
 
StevenRiz said:
They have been in place for a long time. Might this issue still crop up after a while?
The insidious thing about the 2GB+ problem is that you don't even know you have it unless you happen to access data in a block at or beyond the one sitting on the 2GB boundary. The access problem affects only blocks at or beyond the boundary block within the same object (e.g., same table/index/et cetera). So, frankly, you could go for years and not know you have a problem if the object is obscure or if accesses just happen to hit blocks prior to the "problem block".

Therefore, you can determine whether this is an issue for you by checking whether the table you are having trouble with happens to lie on the 2GB "threshold" of one of your 2GB+ data files. Here is a script that I wrote for you that will show the owner, name, and object type of any object that sits on a 2GB boundary in your database:
Code:
col a heading "Owner.Object (Type/File)|on 2GB Boundaries" format a50
-- power(2,31)/8192 is the block number sitting on the 2GB boundary,
-- assuming a db_block_size of 8192
select owner||'.'||segment_name||' ('||lower(segment_type)
       ||' in file #'||file_id||')' a
      ,block_id, blocks
  from dba_extents
 where power(2,31)/8192 between block_id and block_id+blocks
 order by file_id, owner, segment_name
/
For any tables that lie on the 2GB boundary(ies), run a "SELECT" (without a "WHERE" clause) of a single numeric column from the table (which forces a full table scan). If the SELECT fails, then you know you have a problem.
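
For example (a sketch; the owner, table, and column names are placeholders):
Code:
-- aggregating a numeric column with no WHERE clause forces a full
-- table scan, so every block up to the high-water mark gets read
select avg(some_numeric_column) from some_owner.some_table;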

Let me know of the results of running the above script and the SELECT to show if there are any problems.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[ Providing low-cost remote Database Admin services]
Click here to join Utah Oracle Users Group on Tek-Tips if you use Oracle in Utah USA.
 
Mufasa,

I'm not aware of our database having any problems like this, but I ran your query just for fun. It returned several tables and indexes that cross the 2GB boundary. Most of them are not important, but one was. I did a full select, and did not get any errors. Does that mean I'm safe for now, but might have problems in the future? Does this bug affect version 8.0.5.0 of Oracle?

- Dan
 
That is a good question. We had the same results as you just described. I did find this: it might not be bad data, it may be a bad data dictionary. Do you have Metalink? If not, I will copy and paste it for you... This is a preventative measure, but it might at least report what objects are missing, if any. We are working on running this as we speak...
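
In the same spirit (a hedged sketch, not the actual Metalink script; index-organized or temporary tables may show up as false positives), you can look for tables the dictionary knows about that have no matching segment:
Code:
-- cross-check dba_objects against dba_segments for missing segments
select o.owner, o.object_name
  from dba_objects o
 where o.object_type = 'TABLE'
   and not exists (select 1
                     from dba_segments s
                    where s.owner = o.owner
                      and s.segment_name = o.object_name
                      and s.segment_type = 'TABLE');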


 
Dan,

Again, the "2GB+ Problem" exists only on certain versions of certain operating systems...It is not pervasive. If you have done a full table scan on the tables that cross the boundaries, then you should be fine, regardless.

But to ensure that I never run into the problem, I always create one or more 2GB-maximum data files for my tablespaces. There is no real downside to having multiple files in support of a tablespace, and only upsides: it is easier to manage multiple 2GB-max. data files than one gigantic file for a tablespace (e.g., backups, moving, et cetera).
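
For instance (a sketch; the tablespace name and file path are placeholders):
Code:
-- cap each new datafile at 2GB; Oracle allocates across a
-- tablespace's files round-robin as space is needed
alter tablespace data_ts
  add datafile '/u01/oradata/prod/data_ts02.dbf'
  size 512m autoextend on next 128m maxsize 2000m;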

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[ Providing low-cost remote Database Admin services]
Click here to join Utah Oracle Users Group on Tek-Tips if you use Oracle in Utah USA.
 
Just to keep you in the loop: results from that Metalink hcheck suggestion... no errors and one warning found. dbv found no problems either (a sketch of the dbv run follows the output below), yet a problem still exists.


Warning: Poor Default Storage Clauses (see Note:50380.1)

Tablespace   Init    Next    Min    Pct   MaxExtents
TOOLS        32768   32768   32768  0     4096

Found 0 potential problems and 1 warnings

Contact Oracle Support with the output to check if the above needs attention or not



PL/SQL procedure successfully completed.
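
For reference, the dbv pass mentioned above was along these lines (a sketch; the directory is assumed, the file name comes from the listing earlier in the thread, and the block size matches the 8K assumption in the boundary script):
Code:
dbv file=/u01/oradata/prod/dataprod7.dbf blocksize=8192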
 
Dave,

Thanks for clarifying that. Sounds like I'm OK. I'll mention your 2GB max suggestion to our DBA, but my guess is she will not want to make any changes to our production server until a problem really exists.

- Dan
 
The problem with "waiting until a problem exists" is that once it occurs, there is no simple method for recovering the inaccessible data... backups are useless since they, too, propagate the problem; exp/imp is useless since exp cannot read past the "threshold" block either.

I respect your DBA's choice not to "rock the boat" until the need arises, but unfortunately, in this case, by the time you notice the problem on one of the affected OSs, the "boat" has already "hit the iceberg" and there is no recovery option for the affected data.

A "happy medium" for your shop (and anyone else's) is to implement a policy of "2GB-maximum" file creations from now on. There is no loss/disadvantage in pre-creating several "autoextending to 2GB" files since Oracle will use only the space it needs from each file in a "round-robin" allocation.

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)
[ Providing low-cost remote Database Admin services]
Click here to join Utah Oracle Users Group on Tek-Tips if you use Oracle in Utah USA.
 
