Rockinw311
Hi,
I'm looking for some feedback regarding how well Oracle performs when querying against, updating, and bulk loading some very large tables.
Currently we use Teradata, but due to a number of issues we are looking at alternatives. Most of our tables are in the 100-300 million row range, and a few are around a billion rows; anything under a million rows we'd consider small. We'd be looking at a new system of around 4 terabytes, running on a higher-end UNIX box.
From what I remember from a few years back, Oracle couldn't yet support tables this large with acceptable performance. But from what I understand, they've since made good progress toward handling tables at this scale while still performing well. Is this the case?
As for the type of processing we do: roughly a third of our data gets updated nightly. For most of that cycle we avoid transient journal overhead by loading the updated data into empty tables rather than updating in place (see the sketch below). We also squeeze an incremental backup into that window. During the day we run a lot of complex queries, plus a fairly large number of small updates and a relatively small number of larger ones (a couple million rows each).
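To make that concrete, here's a rough sketch of what I imagine the Oracle equivalent of our load-into-empty-tables step would look like; the table names SALES and SALES_STAGE are made up for illustration, and I'm assuming direct-path insert is the right analogue:

```sql
-- Hypothetical tables: SALES is the large production table,
-- SALES_STAGE an empty copy of its structure.

-- Direct-path insert (the APPEND hint) writes above the high-water mark
-- and generates almost no undo; with NOLOGGING set on the target it also
-- generates minimal redo -- roughly what skipping Teradata's transient
-- journal buys us today.
ALTER TABLE sales_stage NOLOGGING;

INSERT /*+ APPEND */ INTO sales_stage
SELECT s.*          -- the nightly transformation logic would go here
FROM   sales s;

COMMIT;

-- The refreshed table would then be swapped in, e.g. via a rename or,
-- for partitioned tables, ALTER TABLE ... EXCHANGE PARTITION.
```

If I understand correctly, NOLOGGING loads aren't protected by normal redo-based recovery until the next backup, which is presumably why we'd want to keep the incremental backup in the same window.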
Is Oracle able to handle tables this large, and what kind of performance could we expect? Lastly, is there anywhere that publishes good benchmark results for Oracle (and other databases such as Teradata) so I could compare the systems?
Thanks