Tek-Tips is the largest IT community on the Internet today!

Members share and learn, making Tek-Tips Forums the best source of peer-reviewed technical information on the Internet!


Poor Performance

Status
Not open for further replies.

BattleDroid42

Programmer
May 17, 2002
I've been in the process of migrating my Access back-end databases to Oracle 9i through ODBC. I'm now starting to regret it and could use some good advice for resolving poor performance problems. Essentially, tables containing a large number of records, say 1-2 million rows, perform so poorly that I have had to go back to the Access back-end tables. I have tried different drivers, indexing, partitioning, and using Oracle Expert, all without a sand grain's worth of success. It's also been a while since my performance-tuning training. Any help would be much appreciated.
 
What do your queries look like?
And where do the indexes sit on your tables?

I don't believe 2 million rows are terribly significant.
200 million rows maybe ;-)
 
There's nothing special about the SQL. A simple delete query takes forever:

DELETE FROM TBL001TST01
WHERE DDATE BETWEEN '01-JUN-03'
AND '06-JUN-03';

The table has a primary key consisting of a DDATE field (DATE) and a four-digit numeric ID (NUMBER(4,0)). Simple. I'm also running the query on the server itself. The server has two processors and plenty of memory, too.

The same query with the same number of records against the Access table takes about 1-2 minutes, which is acceptable.
 
You have to consider that DDATE is a DATE field and not a CHAR string. You would need to explicitly convert the literal CHAR string to a date, e.g. to_date('01-JUN-03','DD-MON-YY') or something like that.
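Following that suggestion, the DELETE shown earlier might look like this (a sketch; the DD-MON-YY format mask is an assumption based on the literals):

```sql
-- Sketch: the earlier DELETE with explicit date conversion,
-- assuming the string literals are in DD-MON-YY format
DELETE FROM TBL001TST01
WHERE DDATE BETWEEN TO_DATE('01-JUN-03', 'DD-MON-YY')
                AND TO_DATE('06-JUN-03', 'DD-MON-YY');
```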
 
I agree with MauriceWalker that this problem most probably lies in a wrong assumption about implicit type conversion. But in any case, besides creating indexes you should also make sure they are actually being used. Thus the first thing I'd recommend is to investigate the execution plan of your statement.

Regards, Dima
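A minimal way to check the plan from SQL*Plus, assuming you are on 9.2 where DBMS_XPLAN is available:

```sql
-- Sketch: generate and display the execution plan for the slow DELETE
EXPLAIN PLAN FOR
DELETE FROM TBL001TST01
WHERE DDATE BETWEEN TO_DATE('01-JUN-03', 'DD-MON-YY')
                AND TO_DATE('06-JUN-03', 'DD-MON-YY');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

A full table scan in the plan where you expected an index range scan would confirm the implicit-conversion theory.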
 
If this were the case, then some SQL would run fine and some wouldn't. Even a simple SELECT statement with no dates or type conversion runs very, very poorly. I have miles of existing SQL and code, and examining each one at this stage would be premature. I also have no desire to redesign all the SQL and reports I've built over a two-year period because Oracle is slow. I believe the problem is something systemic, perhaps ODBC.
 
Why do you think it's slow? I suppose it's fast enough, but its speed depends on the code quality. How is Oracle to know that you want a DATE compared with a STRING in that particular way? Just clarify your task, and it will reply with a speed you could never even imagine with Access. So I'm also sure that if there are a lot of conditions similar to the one provided above, you'll have to redevelop them. And again, try to analyze execution plans at least for long-running queries. Ask your DBA if you don't know how.

Regards, Dima
 
I know slow when I see it and I've said what I think. SQL that took as little as 30-45 seconds to execute and come back against simple linked Access tables takes 13 minutes in Oracle. No speed at all and it's completely ridiculous. I don't have the 2+ years it would take to analyze every SQL statement in the application either. I guess I'll just migrate right back so I can get my job done.

Never mind; obviously not getting help here.
 
You don't need to analyse ALL statements, just THE WORST. There are a lot of ways to determine resource-consuming statements at runtime. Though I'd really suggest you stay with Access, if you're sure it's much faster :)

Obviously not getting help, if not accepting it.

Regards, Dima
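One common sketch for finding the worst offenders is to query v$sqlarea (column names per the standard dynamic performance views):

```sql
-- Sketch: top 10 statements by disk reads since instance startup
SELECT *
  FROM (SELECT sql_text, executions, disk_reads, buffer_gets
          FROM v$sqlarea
         ORDER BY disk_reads DESC)
 WHERE ROWNUM <= 10;
```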
 
I would really prefer to use Oracle 9i primarily because it would be much easier to share data with other work groups. But it has to perform just as well and I can't see why it wouldn't.

The problem is that the entire database is sluggish. How can I make sure it's not ODBC that's the problem? How can I make sure that Oracle isn't doing a bunch of background work it does not need to be doing? I would also like to know which installation schemas I'm safe to do away with, particularly if Oracle is bothering with activity on them when I'll never use them.

Even the simplest SQL statement takes forever to execute. It just does not make sense to start with analyzing SQL. How can I make sure it's not an external problem first?
 
OK, one can understand your frustration. Let's try to analyse your problem a bit further. What hardware are you running on (make, O/S, CPUs, and RAM)? Can you also do a "show sga" and send the output, please?

Also give an indication of the size of your database, whether you are using the rule-based or cost-based optimiser, and when your tables, indexes, and partitions were last analysed, i.e.

select table_name, num_rows, last_analyzed from dba_tables;

If the database (as opposed to a particular query) is generally slow, it most probably indicates an instance/database issue, or possibly a host configuration (shared memory or semaphores) issue.
 
Hello all,

We have seen very poor performance with Oracle 9i (9.2) and AIX 5.2 on a P690. However, I think our situation is somewhat different from BattleDroid's: we have poor performance on the client end, while internally things run great. We have compared with Oracle 8.1.7 and AIX 5.2 on a different LPAR on the 690 and things run much quicker. With that said, has anyone heard of TCP/IP issues with 9i? We currently have the 9i DB as a test copy and are going to tune and test until we resolve this issue. Any suggestions would be appreciated, thanks.
 
I do not know much about AIX. However, is your Oracle 9.2 64-bit or 32-bit? If 64-bit, have the compiled objects (views, triggers, stored procedures, packages, and functions) been recompiled under Oracle 64-bit?
 
The hardware is a Compaq ML350 PIII 1266MHz, with two processors, and 2GB memory (maxed). Front end is Microsoft Access 2000 connected through ODBC with the Oracle driver.

SQL> show sga

Total System Global Area 126557004 bytes
Fixed Size 453452 bytes
Variable Size 109051904 bytes
Database Buffers 16777216 bytes
Redo Buffers 274432 bytes

Tables and indexes are new and have not been analyzed. I've been busy analyzing some SQL and adding indexes recommended by SQL Analyzer. It doesn't seem to make a difference. I have also tried partitioning. Most tables are date-related, with the most recent data being accessed first. Not running in archive-log mode. SQL that takes 2-3 minutes against Access tables is taking 13 minutes in Oracle.
 
Hi,

Just looking at the figures you have provided, you only have a 120MB SGA, of which 16MB is allocated to the db buffer cache, 0.6MB to redo logs and fixed memory, and 104MB to the combined large pool, java pool, and shared pool. In my opinion your db buffer cache is set too low, resulting in MRU chains being flushed out quickly. I also feel that 120MB total SGA is really too small. For example, I have Oracle 9i on Linux with max shared memory at 2GB with the following breakdown:

Code:
SQL> show sga

Total System Global Area 1427183112 bytes
Fixed Size                   452104 bytes
Variable Size             587202560 bytes
Database Buffers          838860800 bytes
Redo Buffers                 667648 bytes

You can see here that I have a total of 1361MB of SGA, of which 800MB is for database buffers and 560MB for the three combined pools (large pool = 208MB, java pool = 112MB, and shared pool = 224MB). So in summary I have allocated nearly 60% to database buffers and 40% to variable size. My suggestion is that you apportion your SGA to give around 60% to the database buffer cache. Also try to increase your total SGA as well. If you have Oracle Enterprise Manager you should be able to do this easily from the GUI. As a matter of interest, are you running Windows or Linux?
Try these suggestions and see if they help.
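As a sketch of what that apportioning might look like in the init.ora (the sizes below are illustrative only, not a recommendation for your particular box):

```
# Hypothetical init.ora fragment for a 2GB server, using 9i dynamic SGA parameters
sga_max_size     = 1200M
db_cache_size    = 700M    # roughly 60% of the SGA to the buffer cache
shared_pool_size = 300M
large_pool_size  = 100M
```

sga_max_size requires an instance restart; the pool and cache sizes can be adjusted online with ALTER SYSTEM in 9i.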
 
Good advice from Sybaseguru. After starting with your memory, you'll probably next want to find any bottlenecks in your I/O. Are you hitting just a couple of tables with lots of physical reads/writes? Where are those tables located? Are they on the same disk? I think you can find this info in v$datafile and v$filestat.
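For example, a sketch of such a query (v$filestat joins to v$datafile on file#):

```sql
-- Sketch: physical reads/writes per datafile, busiest first
SELECT d.name, f.phyrds, f.phywrts
  FROM v$datafile d, v$filestat f
 WHERE d.file# = f.file#
 ORDER BY f.phyrds + f.phywrts DESC;
```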

 
I'd suggest you increase Database Buffers at least eightfold and run

DBMS_STATS.GATHER_SCHEMA_STATS(<USERNAME>)

and then launch that query again.

Without statistics the CBO is completely useless, because it has NO DATA to rely upon.

Regards, Dima
 
Can you connect to your server with SQL*Plus and execute a simple SQL statement, and check the time? If it's faster there, then it must be something with ODBC; otherwise there's some problem on the server side.
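For example, from SQL*Plus on the server itself (TBL001TST01 taken from the earlier post):

```sql
-- Sketch: time a trivial statement locally, bypassing ODBC entirely
SET TIMING ON
SELECT COUNT(*) FROM TBL001TST01;
-- SQL*Plus prints an "Elapsed:" line after the statement completes
```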
 
Oops, I misread the number of digits; you may leave your Database Buffers untouched (at least for now).

Regards, Dima
 