sybaseguru
Instructor
May be a useful tip
If you have 8i scripts that use dbms_utility.analyze_database('COMPUTE') to update stats for the CBO, you may be in for a bit of a shock when migrating to Oracle 9i. Although that utility works fine in Oracle 8i, in 9i it can skew the statistics for the "sys" schema, in other words your data dictionary tables, and 9i performance will then be miserable. If you intend to use the same stats utility for both Oracle 8i and 9i, you should replace:
Code:
dbms_utility.analyze_database('COMPUTE')
with
dbms_utility.analyze_database('COMPUTE');
dbms_utility.analyze_schema('SYS','DELETE');
This will still work OK for both Oracle 8i and 9i. My preferred method for updating stats in Oracle 9i, however, is the dedicated 9i utility
Code:
exec dbms_stats.gather_database_stats(cascade=>true);
This works fine with no side effects.
--Notes
The old-fashioned "analyze table" and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance, because the cost-based SQL optimizer (CBO) relies on the quality of the statistics to choose the best execution plan for every SQL statement. The dbms_stats utility does a far better job of estimating statistics, especially for large partitioned tables, and better statistics result in faster SQL execution plans.
For individual schemas, you can use the example below:
Code:
exec dbms_stats.gather_schema_stats( -
ownname => 'SCOTT', -
options => 'GATHER AUTO', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size repeat', -
degree => 34 -
)
There are several values for the options parameter that we need to know about:
gather - re-analyzes the whole schema.
gather empty - only analyzes tables that have no existing statistics.
gather stale - only re-analyzes tables with more than 10% modifications (inserts, updates, deletes).
gather auto - re-analyzes objects that currently have no statistics as well as objects with stale statistics; it is like combining gather stale and gather empty.
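For instance, a nightly job that only refreshes statistics on tables that have drifted could use the gather stale option. This is just a sketch in the same SQL*Plus style as above; the SCOTT schema is a placeholder:

```sql
-- Sketch: re-analyze only the tables whose stats have gone stale
-- ('SCOTT' is a hypothetical schema name)
exec dbms_stats.gather_schema_stats( -
ownname => 'SCOTT', -
options => 'GATHER STALE', -
estimate_percent => dbms_stats.auto_sample_size -
)
```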
Note that both gather stale and gather auto require monitoring. If you issue the "alter table xxx monitoring" command, Oracle tracks changed tables in the dba_tab_modifications view. Below we see that the exact numbers of inserts, updates and deletes are tracked since the last statistics collection.
SQL> desc dba_tab_modifications;
Name                 Type
-------------------- ------------
TABLE_OWNER          VARCHAR2(30)
TABLE_NAME           VARCHAR2(30)
PARTITION_NAME       VARCHAR2(30)
SUBPARTITION_NAME    VARCHAR2(30)
INSERTS              NUMBER
UPDATES              NUMBER
DELETES              NUMBER
TIMESTAMP            DATE
TRUNCATED            VARCHAR2(3)
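As a quick sketch, turning on monitoring for one table and then inspecting its change counters might look like this (scott.emp and the SCOTT owner are placeholder names):

```sql
-- Enable change tracking for one table (9i syntax; monitoring
-- becomes the default behavior in later releases)
alter table scott.emp monitoring;

-- Inspect the changes accumulated since the last stats collection
select table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner = 'SCOTT';
```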
The most interesting of these is the gather stale option. Remember that the rule for gather stale is more than a 10% row change (based on num_rows at the time statistics were last collected). In a busy OLTP database, like most front-office trading systems, statistics go stale quickly, so almost every table except read-only ones will be re-analyzed on each run. The gather stale option is therefore best for systems that are largely read-only: if only 5% of the database tables get significant updates, then only 5% of the tables will be re-analyzed with "gather stale".
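To see which tables would currently fall over that 10% threshold, one could join the modification counters to num_rows. This is an illustrative, untested query with a placeholder schema name, not a supported staleness check:

```sql
-- Hypothetical staleness check: SCOTT tables with > 10% changes
-- since the last analyze ('SCOTT' is a placeholder owner)
select m.table_name
from dba_tab_modifications m, dba_tables t
where m.table_owner = t.owner
and m.table_name = t.table_name
and t.owner = 'SCOTT'
and (m.inserts + m.updates + m.deletes) > t.num_rows * 0.10;
```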
hope this helps