Performance issue for Transformer

lrnr (Programmer), Feb 4, 2004

Hi All,

I am trying to build a cube from a massive data source (over 2 million rows) on a Unix box. The cube build takes over 60 hours, which is not acceptable, and the metadata creation step is consuming a lot of that time. I am looking for ways to cut the cube build time. Any help would be appreciated.


Thanks
 
How necessary is it to have 2 million rows in your cube?
If users need to go to this level of detail (which I assume is transactional or the like), could you not use a cube or report drill-through to expose it?

If the detail is necessary to relate separate data sources at a low level, would it be possible to make the correlation in an intermediate stage (e.g. a data warehouse summary table) and so speed up the metadata stage?
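
Very roughly, the pre-aggregation I have in mind looks like the sketch below. The table and column names (SALES_TXN, DATE_DIM and so on) are made up for illustration, and the CREATE TABLE AS syntax varies by DBMS (some use SELECT ... INTO); the point is just to collapse the transactional rows to the grain the cube actually reports at before Transformer ever reads them.

-- Hypothetical warehouse summary table: Transformer reads these
-- pre-aggregated rows instead of the raw transactions.
CREATE TABLE SALES_SUMMARY AS
SELECT   d.MONTH_KEY,
         p.PRODUCT_LINE,
         c.REGION,
         SUM(t.QTY)     AS QTY,
         SUM(t.REVENUE) AS REVENUE
FROM     SALES_TXN t
JOIN     DATE_DIM     d ON d.DATE_KEY    = t.DATE_KEY
JOIN     PRODUCT_DIM  p ON p.PRODUCT_KEY = t.PRODUCT_KEY
JOIN     CUSTOMER_DIM c ON c.CUST_KEY    = t.CUST_KEY
GROUP BY d.MONTH_KEY, p.PRODUCT_LINE, c.REGION;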

"Time flies like an arrow, but fruit flies like a banana."
 
What was just mentioned is correct. It sounds like there is WAY too much detail being provided. A cube is an analytical tool, so break the input down as much as possible so that your measures are subtotals at various levels.
The input could certainly be 2 million rows (or a lot more; ours are), but the dimension/category levels should be relatively few.
 
2 million rows is nothing. We are building cubes from 27 million rows, and the build takes about 2.5 hours. What is the SQL of your source query? What DBMS do you use? Did you create separate dimension queries and fact queries? Is the hardware of your Transformer server fast enough?
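
If you have not split them yet, the split usually looks something like the sketch below; the table and column names here are only examples, not your schema. The fact query carries surrogate keys and measures only, and each dimension gets its own small structural query, so Transformer is not dragging descriptive text columns through 2 million rows.

-- Fact query: keys and measures only.
SELECT   DATE_KEY, PRODUCT_KEY, CUST_KEY,
         SUM(QTY)     AS QTY,
         SUM(REVENUE) AS REVENUE
FROM     SALES_TXN
GROUP BY DATE_KEY, PRODUCT_KEY, CUST_KEY;

-- Structural (dimension) query: one per dimension, the key plus the
-- category labels Transformer needs to build the levels.
SELECT   PRODUCT_KEY, PRODUCT_LINE, PRODUCT_NAME
FROM     PRODUCT_DIM;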
 