Recently I upgraded our analyze scripts to use dbms_stats against a particular schema on Oracle 9i, and enabled the parallel processing option. The script loops through a subset of the tables, since not all of them need to be analyzed, which also saves time.
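For context, this is a minimal sketch of the kind of loop I mean; the schema name, the table subset, and the sampling/histogram/parallel settings are placeholders, not our exact values:

    BEGIN
      FOR t IN (SELECT table_name
                  FROM all_tables
                 WHERE owner = 'APP_SCHEMA'                       -- placeholder schema
                   AND table_name IN ('ORDERS', 'ORDER_LINES'))   -- placeholder subset
      LOOP
        DBMS_STATS.GATHER_TABLE_STATS(
          ownname          => 'APP_SCHEMA',
          tabname          => t.table_name,
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- sampling choice is illustrative
          method_opt       => 'FOR ALL COLUMNS SIZE 1',     -- histogram setting is illustrative
          degree           => 4,                            -- parallel degree is illustrative
          cascade          => TRUE);                        -- also gather index stats
      END LOOP;
    END;
    /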
After doing this we had a massive outage, caused by an extremely inefficient execution plan being generated against a set of 8 tables (all of them large). The query went from running in 61ms to 21 minutes, and these queries backed up onto user processes until nothing worked.
Can you explain how dbms_stats would cause the CBO to choose a different execution path for a select?