
Maximum number of tables and performance issues


stevecal (Technical User)
Hi,

I am working on a database which potentially has a large number of tables. While I appreciate that Oracle theoretically supports an unlimited number of tables within a database, are there any known major impacts on performance or should I be looking to find ways to coalesce some of the data which I (optimally) would like to hold discretely?
 
Does this mean that the number of tables is out of control? This is a good reason for redesigning your system :)
In fact, performance is not the only requirement for a good system; another is manageability.
And after all, what number of tables do you consider to be large?

Regards, Dima
 
Actually, no it doesn't. We have volume (billing) data being generated for rapid (web) deployment and display, for which we have a number of dynamically created tables on a per-account basis. These bill tables contain potentially large amounts of data (megabytes' worth in some instances), so simple coalescence is out of the question.
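To be concrete, the per-account creation is roughly along these lines (a simplified sketch; the real table name, columns, and driving logic are more involved):

DECLARE
  v_account_id NUMBER := 12345;  -- placeholder account key
BEGIN
  -- one bill table per account, named after the account
  EXECUTE IMMEDIATE
    'CREATE TABLE bill_' || v_account_id || ' (
       line_item_id  NUMBER PRIMARY KEY,
       bill_date     DATE,
       amount        NUMBER(12,2)
     )';
END;
/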

Manageability is not the issue since the tables are maintained by a table management process which is very slick, combing out redundant data, etc, during periods of low usage.

Performance is not an issue at the moment; the system delivers very efficiently. However, I envisage a day coming when the number of accounts (and the consequent number of tables) /may/ become an issue, and I have a logical split in mind for when that day comes: migrating private clients to one database and business clients to another.

I am really trying here to get a feel for what the impact would be of increasing the number of tables by orders of magnitude.

 
We could probably squeeze a little more performance out of the system by generating empty tables from a SQL script before the customer first accesses the system. We have looked at it, and while it represents a hit, at the moment the performance hit is too small to justify the effort required to modify this behaviour. There may come a time when this becomes necessary, and thanks for bringing it to my attention as a possible target.

However, what I am really trying to discover here is whether there are performance implications contingent on the //number// of tables involved, and whether there are particular numeric ceilings which we may encounter.
 
Steve,

I don't know what Oracle's published maximums are for number of tables in a join, but I just successfully ran a join with 1,030 tables (to ensure that there was no ceiling at 1024). The query took 13 minutes 41 seconds to parse and run, but it ran. Are you planning more than 1,030 tables to join?
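If you want to try that sort of ceiling test yourself, a throwaway PL/SQL block along these lines builds and runs an N-way Cartesian join of the one-row DUAL table against itself, so the result stays at a single row while the parser still has to cope with all the tables (the loop bound and alias names are arbitrary; this is a sketch, not a claim about the exact script used):

SET SERVEROUTPUT ON
DECLARE
  v_sql VARCHAR2(32767) := 'SELECT COUNT(*) FROM dual t1';
  v_cnt NUMBER;
BEGIN
  FOR i IN 2 .. 1030 LOOP
    v_sql := v_sql || ', dual t' || i;  -- add another table to the join
  END LOOP;
  EXECUTE IMMEDIATE v_sql INTO v_cnt;   -- the expensive parse happens here
  DBMS_OUTPUT.PUT_LINE('Joined 1030 tables, count = ' || v_cnt);
END;
/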

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 16:41 (13May04) UTC (aka "GMT" and "Zulu"), 09:41 (13May04) Mountain Time)
 
...or did I miss the point entirely and you were just wondering how many tables an Oracle database can "hold" rather than "join"?

[santa]Mufasa
(aka Dave of Sandy, Utah, USA @ 16:44 (13May04) UTC (aka "GMT" and "Zulu"), 09:44 (13May04) Mountain Time)
 
The number of tables shouldn't have an impact, except to the degree that each table has a minimum allocated size (its initial extents), which is unlikely to be full, so you'll have more space allocated than you actually use. This could affect disk access times.
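If you want to put a number on that allocated-but-unused space, a quick look at DBA_SEGMENTS does it (run as a suitably privileged user; 'BILLING' stands in for whatever schema owns the bill tables):

SELECT COUNT(*)             AS table_count,
       SUM(bytes)/1024/1024 AS allocated_mb
FROM   dba_segments
WHERE  owner = 'BILLING'
AND    segment_type = 'TABLE';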

However, from a design point of view, it would be a mistake to have multiple tables of the same structure with either slightly different names or in different schemas.

Every time an SQL statement goes to Oracle, it needs to be parsed. If Oracle 'remembers' running the statement earlier, the parse is pretty quick; otherwise it's significantly slower (i.e. Oracle has to work out a query plan, look at table/index definitions, statistics, etc.). [When I say slower, I'm talking tenths/hundredths of seconds.]

If you are running lots of slightly different SQLs (e.g. SELECT ... FROM table_fred1, SELECT ... FROM table_bill2, etc.), you'll be doing lots more of these slower 'hard parses', which will add up; it's sort of like having a heavy load in the boot of the car making it a bit sluggish.
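You can watch this happening in the shared pool: every distinct table name produces a distinct cursor, each needing its own hard parse. A rough count of the near-duplicates (the BILL_ prefix is just a guess at your naming convention):

SELECT COUNT(*) AS distinct_cursors
FROM   v$sql
WHERE  UPPER(sql_text) LIKE '%FROM BILL\_%' ESCAPE '\';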

Oracle's solution to this is called either FGAC (Fine-Grained Access Control) or VPD (Virtual Private Database). Basically, it means that behind the scenes, Oracle can rewrite statements to add a clause limiting what users see/affect in a table. That means that Bill can do a DELETE FROM TABLE_TRANS; and only Bill's rows will be deleted, while Fred's rows in the same table remain untouched (and invisible to Bill).
All the users share the same table and the same queries, so Oracle has a better chance of 'remembering' the queries.
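For the record, a bare-bones VPD setup looks something like the sketch below. DBMS_RLS.ADD_POLICY and SYS_CONTEXT are the standard pieces; the schema, table, and function names are illustrative, and I'm assuming an ACCOUNT_ID column that holds the session user's name.

-- Policy function: returns the predicate Oracle appends to every statement
CREATE OR REPLACE FUNCTION acct_predicate (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  RETURN 'account_id = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;
/

-- Attach the policy to the shared table
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'BILLING',
    object_name     => 'TABLE_TRANS',
    policy_name     => 'trans_by_account',
    function_schema => 'BILLING',
    policy_function => 'acct_predicate',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/

With the policy in place, every user runs the same DELETE FROM TABLE_TRANS and gets the predicate bolted on automatically, so there is one shared cursor instead of one per account.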
 
