For Analysis Services, what is the most effective strategy
for processing a cube with a large amount of data, such as
my web logs from IIS? For the fact table, with the grain
specified as one row per click, this equates to
approximately 100 million rows per month, or roughly 1.2
billion rows per year.
Note: I am exploring several options that could
effectively partition the data (e.g., virtual cubes).
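For context, here is a minimal sketch of the month-by-month slicing I have in mind, assuming a fact table named WebLogFact with a ClickDate column (both names are hypothetical stand-ins for my real schema); each generated query would source one monthly cube partition:

    # Minimal sketch (Python): one partition-per-month filtering scheme.
    # "WebLogFact" and "ClickDate" are hypothetical names; each query
    # would back a single Analysis Services partition.

    from datetime import date

    def month_bounds(year: int, month: int) -> tuple:
        """Return the first day of the given month and of the next month."""
        start = date(year, month, 1)
        end = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
        return start, end

    def partition_query(year: int, month: int) -> str:
        """Build the SQL filter that would source one monthly partition."""
        start, end = month_bounds(year, month)
        return (
            "SELECT * FROM WebLogFact "
            f"WHERE ClickDate >= '{start:%Y-%m-%d}' "
            f"AND ClickDate < '{end:%Y-%m-%d}'"
        )

    if __name__ == "__main__":
        # One partition definition per month, ~100 million rows each.
        for m in range(1, 13):
            print(partition_query(2003, m))

The idea is that each monthly slice can be processed independently, so a full reprocess of a billion-plus rows is never required.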