
Partitioning and performance


toddyl

Technical User
Sep 26, 2005
Hi,

I have a stats table into which I am getting 1-minute data from hundreds of servers, each running several processes. The stats themselves are various counters which give me a rate of updates per minute.

The table I created used the following SQL:

CREATE TABLE STAT_RATE
( "SYSTEM"  VARCHAR2(30 BYTE),
  "PROCESS" VARCHAR2(30 BYTE),
  "DAY"     DATE,
  "TIME"    VARCHAR2(8 BYTE),
  "STAT"    VARCHAR2(50 BYTE),
  "VALUE"   NUMBER
)

However, I am getting around 3 million records per day and am trying to keep 30 days of data.

The table is now huge and takes time to query despite the following indexes:

1. unique index of system, process, day, time, stat
2. non-unique index on day
3. non-unique index on system and process
4. non-unique index on stat

I want to improve the performance of this table and was thinking of using partitioning, which is new to me.

Can I just partition by day and use the same tablespace?
Or is there a better way to partition?
Or is there a better way to improve performance?

Thanks for your help,

Tom
 
toddyl,

yes, you can partition and use the same tablespace, but I don't recommend it.

I believe you will have to redesign your db with partitioning in mind.

So, for your whopper table I suggest that you immediately range-partition by date, with one datafile per day, and also one tablespace per day.
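As a rough illustration of that suggestion (the tablespace names, partition names, and dates below are my own assumptions, not anything from the original table):

```sql
-- Sketch: daily range partitions, one tablespace per day.
-- STATS_TS_YYYYMMDD names and the sample dates are illustrative only.
CREATE TABLE STAT_RATE
( "SYSTEM"  VARCHAR2(30 BYTE),
  "PROCESS" VARCHAR2(30 BYTE),
  "DAY"     DATE,
  "TIME"    VARCHAR2(8 BYTE),
  "STAT"    VARCHAR2(50 BYTE),
  "VALUE"   NUMBER
)
PARTITION BY RANGE ("DAY")
( PARTITION P20050926 VALUES LESS THAN (TO_DATE('2005-09-27','YYYY-MM-DD'))
    TABLESPACE STATS_TS_20050926,
  PARTITION P20050927 VALUES LESS THAN (TO_DATE('2005-09-28','YYYY-MM-DD'))
    TABLESPACE STATS_TS_20050927
);
```

With this layout, a query with a WHERE clause on "DAY" can be pruned down to a single day's partition instead of scanning the whole 90-million-row table.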

You may need to consider composite partitioning (i.e. also partition by list of systems) to further split things down.
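A composite scheme along those lines might look like this (the SYSTEM values in the subpartition template are invented for the example):

```sql
-- Sketch: range by day, then list-subpartition by SYSTEM.
-- Server names SRV001/SRV002 are hypothetical.
PARTITION BY RANGE ("DAY")
SUBPARTITION BY LIST ("SYSTEM")
SUBPARTITION TEMPLATE
( SUBPARTITION SP_GROUP_A VALUES ('SRV001','SRV002'),
  SUBPARTITION SP_OTHERS  VALUES (DEFAULT)
)
( PARTITION P20050926 VALUES LESS THAN (TO_DATE('2005-09-27','YYYY-MM-DD'))
);
```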

If you make one datafile and one tablespace per day, you can then program an automatic routine to build you a new file and tablespace the day before it's needed. The same routine can age old tablespaces (and their associated file) out of your system.
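One possible shape for that routine, as an anonymous PL/SQL block you could wrap in a scheduled job (the datafile path, sizes, and naming convention are all assumptions for the sketch):

```sql
-- Sketch of a nightly housekeeping job: create tomorrow's tablespace and
-- partition, and drop the partition that has aged past 30 days.
DECLARE
  v_day DATE         := TRUNC(SYSDATE) + 1;
  v_tag VARCHAR2(8)  := TO_CHAR(v_day, 'YYYYMMDD');
BEGIN
  -- New tablespace and datafile for tomorrow (path/size are illustrative)
  EXECUTE IMMEDIATE
    'CREATE TABLESPACE STATS_TS_' || v_tag ||
    ' DATAFILE ''/u01/oradata/stats_' || v_tag || '.dbf'' SIZE 2G';

  -- New daily partition placed in that tablespace
  EXECUTE IMMEDIATE
    'ALTER TABLE STAT_RATE ADD PARTITION P' || v_tag ||
    ' VALUES LESS THAN (TO_DATE(''' ||
    TO_CHAR(v_day + 1, 'YYYY-MM-DD') || ''',''YYYY-MM-DD''))' ||
    ' TABLESPACE STATS_TS_' || v_tag;

  -- Age out the partition that is now more than 30 days old
  EXECUTE IMMEDIATE
    'ALTER TABLE STAT_RATE DROP PARTITION P' ||
    TO_CHAR(TRUNC(SYSDATE) - 30, 'YYYYMMDD');
END;
/
```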

By having separate tablespaces, you can make the old ones read-only, which means that you back them up once and then never bother to do it again. If you don't do this, eventually your backup pieces will become unmanageably large, and backups will take forever to complete (i.e. longer than your available backup window).
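Marking an old day read-only is a one-liner (tablespace name again assumed from the naming scheme above); with RMAN's backup optimization enabled, already-backed-up read-only tablespaces are then skipped on subsequent runs:

```sql
-- Once a day's data will no longer change:
ALTER TABLESPACE STATS_TS_20050826 READ ONLY;

-- In RMAN, so later backups skip read-only tablespaces already backed up:
-- CONFIGURE BACKUP OPTIMIZATION ON;
```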

Once your tablespaces and their files are read-only, dump them off onto dirt cheap mass storage (e.g. a dvd juke box) and forget about them.

Your recovery times will also improve, as Oracle has an init param which tells it not to load read-only tablespaces until they're needed. Therefore during a recovery the current online tablespaces would be brought back first, putting the system back online faster. Why rush to load data from last year?
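Assuming the parameter meant here is READ_ONLY_OPEN_DELAYED (which defers access-checking of read-only datafiles until they are first used), it is static and would be set in the spfile:

```sql
-- Static parameter; takes effect on the next instance restart.
ALTER SYSTEM SET READ_ONLY_OPEN_DELAYED = TRUE SCOPE = SPFILE;
```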

Regards

T
 
Hi,

Thanks for your response.

It looks as if I may not be able to use partitioning as I appear to be locked down to using only the 1 tablespace -- part of the rules here unfortunately.

Is there any other way I can rebuild this table to help improve performance?

Thanks for your help,

Tom
 
yes,

you can still make improvements. It's not essential to have multiple tablespaces, just desirable (although without them you can't make the mouldy oldies read-only and skip backing them up).

However, if you create a new file per day, and partition as I described above, you can just add it to the one permitted tablespace.
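Growing the single permitted tablespace by one file per day might look like this (the tablespace name, file path, and size are assumptions):

```sql
-- Sketch: add a new datafile each day to the one allowed tablespace.
ALTER TABLESPACE STATS_TS
  ADD DATAFILE '/u01/oradata/stats_20050927.dbf' SIZE 2G;
```

The daily partitions then all live in that one tablespace, so you keep the partition-pruning benefit even though you lose the read-only/backup benefit.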

By enforcing this rule, they guarantee that your backups will take ever longer to complete and become ever larger. Maybe you could demonstrate this to the "powers that be" and show them the error of their ways.

Also, your employers are forcing you to put all your eggs in one basket. If you had numerous tablespaces spread across mass storage, it would be possible to suffer the loss of a tablespace and not interrupt the application. If you lose the one whopper, then the whole thing is dead, dead, dead.

If partitioning doesn't provide enough of a performance increase, then establish the requirements of your largest reports, and create either summary tables or materialized views to provide the data sufficiently fast. These will likely need to be refreshed as a batch job (since you appear to have lots of data and little room to work).
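A sketch of the materialized-view idea, pre-aggregating to one row per system/process/stat/day (the view name and the choice of aggregates are my own assumptions):

```sql
-- Hypothetical summary MV, refreshed on demand by a batch job.
CREATE MATERIALIZED VIEW STAT_RATE_DAILY_MV
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT "SYSTEM", "PROCESS", "DAY", "STAT",
       AVG("VALUE") AS AVG_VALUE,
       MAX("VALUE") AS MAX_VALUE
FROM   STAT_RATE
GROUP BY "SYSTEM", "PROCESS", "DAY", "STAT";

-- Nightly refresh ('C' = complete):
EXEC DBMS_MVIEW.REFRESH('STAT_RATE_DAILY_MV', 'C');
```

Reports that only need daily rates then read ~thousands of rows from the MV instead of millions from the base table.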

Let me know how you get on.

Regards

T
 