Losing Categories when splitting the IQD file

donafran (Programmer)
Jul 11, 2002
Hello! We are still on Version 6.x of everything. We have a Transformer model that has one LARGE .iqd file as the input data source. This cube takes a few hours to build. We are experimenting with breaking the input into several .iqd files.
We have broken out the customer information into a separate .iqd. We have also, in Transformer, deleted the corresponding columns from the original data source and used "Modify Columns" to get everything in sync in both data source files. I have checked the "Show References" information, and the columns under the new customer data source now show the correct references.
When we run the cube build, we get "ALL CUSTOMERS", but no values under it... What am I missing about splitting the .iqd?

Please help!
 
Are you sure that you have set up the right link between data sources in Transformer? To connect (join) multiple data sources in Transformer you need to have the same column name in the two data sources. When Transformer detects the same column name in two data sources, it assumes there is a join. Be aware that the naming is case sensitive.

To check whether the data source has the right scope you can perform a Show Scope by right-clicking on the data source in Transformer. If you see red labels you have a uniqueness violation problem. A good solution is to double-click the red label and check the Unique property in the Level properties sheet.

The join column between the two data sources should at least be used in the Dimension Map (e.g. if productid is the join, then this column should be used to populate a level in Transformer).
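As a sketch with made-up table and column names: if customer.iqd exposes the key as "Cust_No" and the transaction .iqd exposes it as "CUST_NO", Transformer will not detect a join. Aliasing the key identically in both queries (same spelling, same case) is what makes the link work:

    -- customer.iqd (structural data source)
    SELECT C.CUSTNO AS CUST_NO, C.NAME AS CUST_NAME, C.STATE AS CUST_STATE
    FROM   CUSTOMER C

    -- sales.iqd (transactional data source)
    SELECT S.CUSTNO AS CUST_NO, S.AMOUNT AS SALE_AMOUNT
    FROM   SALES S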

Good Luck

Jack
 
Jack -
I read your reply, and in one sense it makes sense that the data sources need to be joined, but in another way it doesn't make sense to me at all. If I have one feeder data source that contains the Customer Number, Customer Name, and Customer State location, why would I need these fields repeated in the transactional data source?

Would I need all the customer fields (i.e., Name, State, etc.) or just the Customer Number (a "key" field to join on) in the transactional .iqd?

What would be the point of splitting the data source from one large .iqd file into several if you still need to have all the fields in the large .iqd file?
 
You need all the same fields whether the IQD is one big one or several small ones.

The reason you would want to split them up is performance.

If your large IQD has customer number, customer name, address, phone #, and sales $, then a good way to split it up would be IQD 1: customer number, customer name, etc. (static information), and IQD 2: customer number and sales $. Note that customer number is common to both, acting as a key.

This way you are parsing the static customer information only once. In some cases (ours), the ERP system only shows sales by customer number, and to get the customer name and address I have to join with small IQDs.

Doing it in one large IQD is not wrong, and in some cases it is the only way you can do it, but if you can split the IQDs into "normalized" chunks, the cube build time is much quicker.
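A rough sketch of that split (table and column names invented for illustration):

    -- iqd 1: static customer information, read only once
    SELECT CUSTOMER_NUMBER, CUSTOMER_NAME, ADDRESS, PHONE
    FROM   CUSTOMER_MASTER

    -- iqd 2: transactional data, carrying only the key plus the measure
    SELECT CUSTOMER_NUMBER, SALES_AMOUNT
    FROM   SALES_HISTORY

CUSTOMER_NUMBER appears in both, so Transformer can tie the sales figures back to the customer details.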

In addition to esselinj's comment above about the field names being case-sensitive, the values within the field are also case sensitive. Transformer is unforgiving about this!

Hope this helps you,
Bruce Reed
 
In the transaction .iqd you only need the foreign key (e.g. the customer key); this links to the structure .iqd, which contains all the customer dimension data. To have the join work effectively you should at least use the customer number as a source column for a level in the dimension map.

The reason people do this is that the cube builds faster (you use leaner and meaner queries instead of one bulk query).
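To picture the difference (made-up table names): the single bulk query repeats the customer attributes on every transaction row, whereas the split version reads them only once:

    -- one big IQD: customer name/state repeated for every sale
    SELECT C.CUST_NO, C.CUST_NAME, C.CUST_STATE, S.SALE_AMOUNT
    FROM   CUSTOMER C, SALES S
    WHERE  C.CUST_NO = S.CUST_NO

    -- split transaction IQD: only the foreign key plus the measure
    SELECT S.CUST_NO, S.SALE_AMOUNT
    FROM   SALES S

CUST_NO then feeds the lowest level of the customer dimension in the dimension map, and the names and states come from the structure .iqd.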

Hope this helps a bit more

Jack
 