
Business Objects performance


Klopper (MIS) · Dec 7, 2000 · US
I am looking for tips or resources on optimising the performance of BusObj and/or troubleshooting slow running.

It seems everything I do steals all the CPU power for upwards of 5 minutes - even a simple operation such as copying and pasting a cell. It means that any development of new reports is a very frustrating experience...

I am an inexperienced user, but I have to ask: is this normal?
All help appreciated!
 
What version of BOb are you using, what database is your repository on, and what spec of PC / server are you using? I have seen the above on version 3 of BOb with low-spec PCs. Are you copying and pasting cells with hundreds of thousands of rows of data in them? It is certainly not normal.

Nick
ndaniels@ventura-uk.com

If it's Data Warehousing or related then I want to know - but post it to the forum!
 
Thanks for the response!
I am using: BusObj v5.1, 256 MB RAM, 700 MHz CPU.

I guessed originally that the slow running might be due to restricted network resources (the repository is a Sybase db with 110,000 records, which resides across the Atlantic), but apparently everything is done locally on the PC once the data is drawn down.

I have noticed that the ranking function, used in most of my reports, can also slow things down considerably.

The copying/pasting is of JUST a simple cell containing a report heading!
 
When you are creating the data cube, are you including something that is likely to be unique to each record (like a name, an address line or an ID number)? Or are you generating a cube with every possible field in it?

BusObj is going to (in its basic form) perform a SQL query that goes:
[tt]select A, B, C, D, count(*)
from <table>
group by A, B, C, D[/tt]

In English, that is: "return me a row for each unique combination of fields A, B, C and D, with a count of how many times each combination occurs".

If one of the fields has a lot of possible values, or you choose lots of fields, then you will have a lot of rows returned from your query, up to a maximum of 110,000 in your case. That is quite a lot of data to move around in memory.
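
As a rough illustration (the table and column names here are hypothetical, not from your universe), compare a cube built on a few coarse fields with one that drags in a near-unique ID:
[tt]-- Coarse cube: one row per region/product combination
select region, product, count(*)
from sales
group by region, product

-- Near-unique field included: close to one row per source record (up to 110,000 here)
select region, product, customer_id, count(*)
from sales
group by region, product, customer_id[/tt]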

Check out your data query.
 
The CPU is 100% used during the Fetch phase, which consists of the data retrieval and the micro-cube building.
This is normal BO behaviour: during the Fetch phase, the CPU resources are not shared with other applications. The CPU can sit at 100% if there is a large amount of data to retrieve or a lot of calculation to process.

To improve the response time of the query, you can modify the array fetch size and the table weights of the tables.
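
As a minimal sketch (hypothetical table names; whether the order actually matters depends on your database's optimiser), table weights only change the order in which BusObj writes the tables into the generated FROM clause, and a larger array fetch size simply pulls back more rows per network round trip:
[tt]-- Same logical query; only the FROM clause order differs.
-- Some rule-based optimisers treat this order as a join-order hint.
select c.customer_name, sum(s.amount)
from sales s, customer c        -- heavier (larger) table listed first
where s.customer_id = c.customer_id
group by c.customer_name

select c.customer_name, sum(s.amount)
from customer c, sales s        -- lighter table listed first
where s.customer_id = c.customer_id
group by c.customer_name[/tt]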

Things to consider:
A) How much RAM do you have on your PC? The microcube and variables will be stored in it.
The basic requirements are a Pentium 100 MHz processor and 32 MB RAM, but this is very basic, and bigger queries will require much more memory and processor speed.

A good test is to run the same query on a more powerful machine to see if it goes faster.

I have tested reports on a 550 MHz processor that needed 30 minutes to open, with the CPU usage at 100%. This is normal behaviour. To decrease the running time, and with it the CPU usage, see B).


B) BO's performance can be measured at two very different levels: at the query level itself, then at the reporting stage.

- lack of performance at the query level will often mean that the query is using up a large part of the server's resources, which also affects other users. This can be improved by
- not selecting more objects than actually needed in the report (which reduces network transfer)
- using aggregate tables via aggregate awareness (which reduces computation times on the server; see the SQL sketch after this list)
- using table-weighting (database-level optimisation of the query)
- optimising the memory and processing resources on the client machine (this applies to both stages)

- lack of performance at report level (e.g. long computation times when just opening a document, selecting a report or modifying anything in the report). This can be caused by
- nested variables, e.g. <Z>=f(<Y>), where <Y>=g(<X>), will be slower than <Z>=f(g(<X>)). (Note: if you're using variables to perform complex calculations on a regular basis, it may be worth building an external function DLL!)
- use of numerous queries: don't use more synchronised queries than you need to, as synchronisation is a rather resource-consuming process
- use of rankings: ranking capabilities are a powerful feature, but this also means they take up a lot of resources. Using one is OK, but using several in the same block can lead to long response times.
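To make the aggregate-awareness point concrete, here is a rough sketch (hypothetical table names; in practice you set this up in the universe with the @Aggregate_Aware function rather than hand-writing the SQL). When the query only needs summarised data, BusObj can be pointed at a pre-summarised table instead of the detail table:
[tt]-- Without an aggregate table: the database scans every detail row
select region, sum(amount)
from sales_detail
group by region

-- With aggregate awareness: the same objects resolve to a pre-summarised table
select region, sum(amount)
from sales_summary_by_region
group by region[/tt]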
Finally, you can balance your queries and variables to improve whichever stage you want. For example, if you need dates to be available in both date and character string format for reporting purposes, you can either define a variable that makes the conversion for you within the document, or create a User-Defined Object (i.e. at the query level) that will allow you to have this object directly available from the microcube. The former will ensure that the server doesn't get overloaded whereas the latter ensures short computation times within the report.
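
For the date example, the query-level option might look like this (a sketch only: the conversion function depends on your warehouse database, convert() below assumes Sybase-style syntax, and the table name is made up). A User-Defined Object pushes the conversion into the generated SQL, so the string version arrives ready-made in the microcube, whereas a report variable repeats the conversion on the client every time the report recalculates:
[tt]-- User-Defined Object definition (query level): the database does the conversion
convert(varchar(8), orders.order_date, 112)   -- yyyymmdd string

-- The generated query then returns both formats directly:
select orders.order_date,
       convert(varchar(8), orders.order_date, 112)
from orders[/tt]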
 