
Cube Processing 1

Status
Not open for further replies.

youngcougar (Programmer, CA) - Sep 29, 2004
When the fact table has a primary key on 4 columns, would this slow processing down? What is the most efficient object type to process from, i.e. a table or a view? I have a cube built from a partitioned view, and it processes in approx 30 minutes with 42 million rows of sales data. When I create the same cube from a table whose primary key is made up of 4 columns, it takes over 12 hours with the same amount of data. Why?
 
This is more an issue of database tuning for the cube's selection criteria than a cube issue. If the cube's SELECT statement can use the partition to help extract data, then it will do so and improve performance, especially if you have a multi-CPU server. That is, it can dedicate a CPU to each partition with full knowledge that there will be no overlap or missing data.
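For reference, a partitioned view in SQL Server 2000 is a UNION ALL view over member tables whose CHECK constraints partition the data. A minimal sketch (the table and column names here are hypothetical, not from the original poster's schema):

```sql
-- Hypothetical yearly member tables. The CHECK constraints tell the
-- optimizer that each table holds a disjoint date range, so it can
-- eliminate members from a query and scan the rest in parallel.
CREATE TABLE SalesFact2003 (
    SaleDate   DATETIME NOT NULL
        CHECK (SaleDate >= '2003-01-01' AND SaleDate < '2004-01-01'),
    LocationID INT      NOT NULL,
    ProductID  INT      NOT NULL,
    SalesAmt   MONEY    NOT NULL
);

CREATE TABLE SalesFact2004 (
    SaleDate   DATETIME NOT NULL
        CHECK (SaleDate >= '2004-01-01' AND SaleDate < '2005-01-01'),
    LocationID INT      NOT NULL,
    ProductID  INT      NOT NULL,
    SalesAmt   MONEY    NOT NULL
);
GO

-- The partitioned view the cube would select from.
CREATE VIEW SalesFact AS
    SELECT SaleDate, LocationID, ProductID, SalesAmt FROM SalesFact2003
    UNION ALL
    SELECT SaleDate, LocationID, ProductID, SalesAmt FROM SalesFact2004;
```

This is why the view-based cube can process quickly: each member table is a smaller, independent scan.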

-------------------------
The trouble with doing something right the first time is that nobody appreciates how difficult it was - Steven Wright
 
I'm new enough to cubes - so what would be the most efficient way to set up such a large cube? There are only 3 measures. The fact table is daily sales by location, joined to a time dimension that lets users drill down to the day, a location dimension, and a product dimension. I can't believe that an optimized table would take so much longer than a partitioned view. Is it because it is rewriting the index files on the server?
 
What version of Analysis Services and SQL are you using?

Also what are the aggregate types of your measures, and your % of aggregations?

Shoot Me! Shoot Me NOW!!!
- Daffy Duck
 
Both Standard. Aggregate types are Sum. What do you mean by % of aggregations?
 
2K or 2005?


 
2K, sorry - can't wait till I get to try 2005, I hear there is much improvement. We are currently testing it, and I really hope it will get rid of this issue. One thing I did try was removing the PK and using an identity column as the PK instead, and that did seem to speed up processing quite a bit.
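The surrogate-key swap described above could look like this (a sketch only - the table and constraint names are hypothetical, and you would keep a nonclustered index on the old four columns if anything still looks rows up by them):

```sql
-- Drop the wide 4-column composite primary key...
ALTER TABLE SalesFact DROP CONSTRAINT PK_SalesFact;

-- ...add a narrow identity surrogate key...
ALTER TABLE SalesFact ADD SalesFactID INT IDENTITY(1,1) NOT NULL;

-- ...and make it the new clustered primary key. Inserts and scans now
-- maintain one narrow monotonic index instead of a wide composite one.
ALTER TABLE SalesFact
    ADD CONSTRAINT PK_SalesFact PRIMARY KEY CLUSTERED (SalesFactID);
```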
 
One thing you can do is examine the SELECT statement issued during the cube build. If your cube has been optimized, you should see something like

SELECT ColA, ColB FROM FactTable

If you don't, your cube is not optimized. Not having used 2K for a while, I'm not 100% sure which menu it is under, but if you open the cube designer and go through each of the menu drop-downs you should see Optimize Cube or Optimize Schema, something like that. Doing this will help; however, for the optimization to work, a few things have to be followed in the cube and dimension design. The 2 main points:

1) The leaf level of your dimensions must have member key uniqueness set to true.
2) Your dimensions and fact tables must join at the leaf level of the dimension. Yes, I know this one sounds like a no-brainer, but very early in my OLAP career I had an instance where the leaf of the date dimension was day, and we also carried a month key. The fact table supported either a day or a month join, and since all reporting of this data was at the month level, I joined at the month. Oops - that was not the right decision.
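To make the difference concrete, here is roughly what the processing query looks like before and after schema optimization (dimension and column names are hypothetical, for illustration only):

```sql
-- Unoptimized: Analysis Services joins every dimension table just to
-- resolve the member keys - expensive at 42 million fact rows.
SELECT d.DayKey, l.LocationKey, p.ProductKey, f.SalesAmt
FROM   SalesFact f
       JOIN DimDate     d ON f.DateKey     = d.DayKey
       JOIN DimLocation l ON f.LocationKey = l.LocationKey
       JOIN DimProduct  p ON f.ProductKey  = p.ProductKey;

-- Optimized schema: the leaf-level keys are read straight off the fact
-- table, so the dimension joins disappear entirely.
SELECT f.DateKey, f.LocationKey, f.ProductKey, f.SalesAmt
FROM   SalesFact f;
```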

Good luck, and I hope this points you in the right direction.

Also, if you venture into SSAS on 2005, you may want to push the money people to spring for Enterprise, as cube partitioning is offered only in Enterprise and directly addresses the issue you are currently experiencing.

 
That made a huge difference - that and removing a dimension that was too granular!
 
