Just need your opinion on this. Is it really advisable to create PowerCubes and pool all the atomic data into them such that you no longer access the data warehouse?
I'm not sure I understand exactly what you mean here, but if I'm reading your question correctly, you want your cubes to carry such a degree of detail that there is no reason to query the data warehouse directly?
I would argue against this practice. First of all, PowerCubes are intended for high-level analytics, not detail reporting. We tried a 'monster cube' approach once and it failed miserably for two reasons. First, PowerCubes cannot handle that much data; they simply will not build. Second, even if you do get a cube to build, it will suffer from long build times and poor responsiveness unless you have auto-partitioning cranked all the way up (which leads to astonishingly LONG build times even for relatively small amounts of data).
Our approach, which works quite well, is as follows:
ONE data mart focusing on a particular business process.
THREE to FIVE (generally) powercubes built from that data mart, each focusing on specific business functions within the business process.
MULTIPLE (as needed) Impromptu reports addressing specific business questions within a business function.
We also enable drill through from cube-to-cube and cube-to-report as needed or desired.
The key thing to remember here is responsiveness. Even if you have all of your data accessible via PowerCubes, if the performance is poor the business analysts WON'T USE THEM (trust me).
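To illustrate the summary-vs-detail distinction behind this advice, here is a minimal sketch (in Python, with made-up row data and a hypothetical `build_summary` helper) of why a cube should hold pre-aggregated measures rather than every atomic transaction: rolling detail rows up to the dimensions analysts actually slice by collapses the row count dramatically, which is what keeps builds and queries fast.

```python
from collections import defaultdict

# Hypothetical atomic fact rows: (date, product, store, amount).
# In a real warehouse these would number in the millions.
detail_rows = [
    ("2024-01-01", "widget", "north", 10.0),
    ("2024-01-01", "widget", "north", 5.0),
    ("2024-01-01", "gadget", "south", 7.5),
    ("2024-01-02", "widget", "north", 2.5),
]

def build_summary(rows, dims):
    """Aggregate atomic rows up to the given dimension columns,
    analogous to how a cube stores pre-summarized measures
    instead of every transaction."""
    names = ("date", "product", "store")
    idx = [names.index(d) for d in dims]
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[i] for i in idx)
        totals[key] += row[3]  # amount is the measure
    return dict(totals)

# Cube-level view: sales by product -- two cells instead of four detail rows.
by_product = build_summary(detail_rows, ("product",))
# by_product == {("widget",): 17.5, ("gadget",): 7.5}
```

The drill-through pattern described above fits the same picture: the cube answers the summarized question quickly, and only when an analyst needs the underlying transactions does the tool hand off to a detail report against the data mart.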
Yes, he deserves a star. This is exactly my point when I argue against using PowerCubes as a detail-level data mart. I know that somewhere down the line PowerCube performance will degrade significantly as more detail comes in.