OLAP tool and voluminous data

Guest_imported

New member
Jan 1, 1970
0
Hi All,
I have a general question about the ability of any OLAP tool to handle very large data volumes.
Every month, somewhere between 30 million and 2 billion rows will be added or updated in the data warehouse. This is transaction-level data and it will be stored in a SQL Server database. We have selected E.piphany on top of SQL Server.
Will any OLAP tool be able to handle summary-level data over such volumes? Secondly, can satisfactory performance be achieved even after the database is highly tuned? The client expects to see output on reports/cubes, or while drilling through, in less than 5 minutes.
Any guidelines about the performance impact would be highly appreciated.
Thanks
 
Hello TensionFree,

If you expect a lot of the reporting to be done over aggregated data (rather than over the transactional details), I would suggest that you build a couple of aggregate tables on top of those details. Such tables typically contain a fraction of the number of records in the detail table and give the benefit of much quicker query responses. I use this approach to create a very fast aggregation table for monthly reporting; a minimal sketch of what that can look like is below. I do not know how you transfer data to your DWH environment, but a state-of-the-art ETL tool makes this very easy.
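For illustration, here is a minimal T-SQL sketch of that approach. The table and column names (sales_fact, sales_fact_monthly, sale_date, product_id, store_id, quantity, amount) are hypothetical stand-ins, not anything from your warehouse:

-- Hypothetical detail fact table: sales_fact(sale_date, product_id, store_id, quantity, amount).
-- Build a pre-aggregated monthly summary table; refresh it as part of each monthly load.
SELECT
    DATEADD(month, DATEDIFF(month, 0, sale_date), 0) AS month_start, -- first day of the month
    product_id,
    store_id,
    SUM(quantity) AS total_quantity,
    SUM(amount)   AS total_amount,
    COUNT(*)      AS detail_row_count
INTO sales_fact_monthly
FROM sales_fact
GROUP BY DATEADD(month, DATEDIFF(month, 0, sale_date), 0), product_id, store_id;

-- Index the summary on the columns the monthly reports filter and join on.
CREATE CLUSTERED INDEX ix_sales_fact_monthly
    ON sales_fact_monthly (month_start, product_id, store_id);

Even with billions of detail rows per month, a summary at the month/product/store grain often collapses to a small fraction of that, which is where the query-time savings come from.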
T. Blom
Information analyst
Shimano Europe
tbl@shimano-eu.com
 
While I have never actually used E.piphany, I have never heard it described as an OLAP tool. I know it more as a CRM analytics offering with vertical specialization. But I'm sure someone from E.piphany could argue that point with me.

Regardless, if you have a TON of data, AND you expect your end-users to need access to the entire depth and breadth of data, you probably need a ROLAP engine that supports aggregate awareness.

In my opinion, MicroStrategy is the best of breed in this category. Siebel Analytics (formerly nQuire) is an up-and-comer, but is still arguably immature. Other ROLAP engines include MetaCube, owned by IBM, and Information Advantage, now owned by CA. These last two have lost a lot of market share in recent years, and from what I can see have stopped innovating, and to me that doesn't make them very good choices. Finally, vendors on the MOLAP side such as Business Objects and Hyperion Solutions are coming out with hybrid technology that allows a user to first interact with a cube, and then 'reach-through' to the database if the query requires. I don't have any experience with these though.

As Mr. Blom said, you will need to build aggregate fact tables based on your query statistics and integrate them into your business intelligence project. As to whether a particular tool - OLAP or otherwise - will intelligently recognize aggregates and choose the most efficient table for the query, that is completely vendor-specific.
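To make the aggregate-awareness point concrete, here is a sketch of the rewrite such an engine performs behind the scenes, reusing the hypothetical sales_fact and sales_fact_monthly tables from the earlier post; the actual rewrite logic and syntax are entirely tool-specific:

-- The report question: monthly revenue per product for 2003.
-- A tool without aggregate awareness issues this against the detail fact table:
SELECT DATEADD(month, DATEDIFF(month, 0, sale_date), 0) AS month_start,
       product_id,
       SUM(amount) AS total_amount
FROM sales_fact
WHERE sale_date >= '20030101' AND sale_date < '20040101'
GROUP BY DATEADD(month, DATEDIFF(month, 0, sale_date), 0), product_id;

-- An aggregate-aware engine answers the same question from the much smaller
-- summary table, typically far faster:
SELECT month_start,
       product_id,
       SUM(total_amount) AS total_amount
FROM sales_fact_monthly
WHERE month_start >= '20030101' AND month_start < '20040101'
GROUP BY month_start, product_id;

If the tool you choose cannot do this rewrite automatically, you can get much of the same effect by pointing the high-level reports directly at the summary tables and reserving the detail table for drill-through.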
 