
Analysis Services 2005 Cube Processing Estimate


pjohns (Programmer) · Mar 21, 2007
Does anyone know of any hard-and-fast rules that can be used to approximate cube processing times (full and incremental) for a production system?

Thank you.
 
The only way to get a good idea is to have a test environment that mirrors your production environment. There are, however, ways to minimize your build times. If your cube is fairly large and processing time is a factor, you should be using partitions, which let you process only the partitions whose source data has changed. The most common partitioning strategies are time-based, at the week or month level depending on data volume.

Hardware also plays a role in processing times. I found that a 64-bit, dual-socket, quad-core server could process my cubes about 75% faster than an older dual-processor server with hyper-threading and the same amount of memory.
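For reference, an XMLA command to fully process a single monthly partition looks something like this (all the IDs below are made up; substitute the database, cube, measure group, and partition IDs from your own project):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Hypothetical IDs; replace with values from your own Analysis Services project -->
  <Object>
    <DatabaseID>SalesDW</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>FactSales</MeasureGroupID>
    <PartitionID>FactSales_200703</PartitionID>
  </Object>
  <Type>ProcessFull</Type>
</Process>

Scheduling one of these per changed partition is how you avoid reprocessing the whole cube every night.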

Paul
---------------------------------------
Shoot Me! Shoot Me NOW!!!
- Daffy Duck
 
Thanks, Paul. Just curious: how big are your cubes, and how long does it take to process each of them?

Thanks.
 
My current cubes are fairly small. The average is 750,000 records a month with a 3-5 year load. I have worked with cubes as large as 2.6 billion records per year, with 3 years of data loaded into a single cube.

I can load 1 month in approximately 2 minutes using the 64-bit hardware mentioned above.
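As a back-of-the-envelope check on those numbers (assuming partitions process one at a time and every month is a similar size): a 3-year load at monthly grain is 36 partitions, so the initial full load is on the order of 36 x 2 = ~72 minutes, while the routine monthly incremental is just the ~2 minutes for the newest partition.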

One other factor that has a large impact on both processing times and query performance is your aggregation design. My cubes are fairly heavily aggregated. When designing aggregations, I recommend adding them in small steps, because you can hit a point where a single additional aggregation significantly increases build time and storage. On the 3.5B-records-per-year cube, I had one aggregation that took almost an hour to build and cost 40% more in storage.
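A related trick while experimenting with aggregation designs: you can rebuild just the aggregations and indexes, without rereading the source data, by running a ProcessIndexes command against the already-processed cube. Something like this (IDs again made up):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- ProcessIndexes rebuilds aggregations/indexes only; fact data is not re-read from the source -->
  <Object>
    <DatabaseID>SalesDW</DatabaseID>
    <CubeID>Sales</CubeID>
  </Object>
  <Type>ProcessIndexes</Type>
</Process>

That makes it much cheaper to measure the incremental cost of each aggregation step.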

If you're using SQL2K, note that a distinct count measure in your cube can also significantly affect processing times.

Paul
---------------------------------------
Shoot Me! Shoot Me NOW!!!
- Daffy Duck
 