The CPU is 100% used during the Fetch phase, which consists of data retrieval and micro-cube building.
This is normal BO behaviour: during the Fetch phase, CPU resources are not shared with other applications. CPU usage can reach 100% when there is a large amount of data to retrieve or many calculations to process.
To improve the response time of the query, you can adjust the array fetch size and the table weights.
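The idea behind the array fetch size is generic to database clients: rows are pulled from the server in batches, so a larger batch means fewer round trips at the cost of more client memory per batch. A minimal sketch of that trade-off using Python's standard DB-API batching (`cursor.fetchmany`) against an in-memory SQLite table — the table, row count, and `BATCH` value are illustrative, not BO settings:

```python
import sqlite3

# Build a small in-memory table to fetch from.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1000)])

cur = conn.cursor()
cur.execute("SELECT id, amount FROM sales")

# A larger batch size means fewer fetch calls (fewer round trips on a
# networked database), at the cost of more client-side memory per batch.
BATCH = 200  # analogous to tuning BO's array fetch size
rows_fetched = 0
while True:
    batch = cur.fetchmany(BATCH)
    if not batch:
        break
    rows_fetched += len(batch)

print(rows_fetched)  # 1000
```

With `BATCH = 200` the 1000 rows arrive in 5 fetch calls instead of 1000 single-row fetches; on a real networked database that difference dominates the Fetch phase.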
Things to consider:
A) How much RAM do you have on your PC? The microcube and variables are stored in it.
The minimum requirements are a Pentium 100 MHz processor and 32 MB of RAM, but that is a bare minimum; larger queries will require much more memory and processor speed.
A good test is to run the same query on a more powerful machine to see if it goes faster.
I tested reports on a 550 MHz processor that took 30 minutes to open, with CPU usage at 100%. This is normal behaviour. To decrease the running time, and with it the CPU usage, see B).
B) BO's performance can be measured at two very different levels: at the query level itself, and at the reporting stage.
- Lack of performance at the query level usually means the query is using a large share of the server's resources, which also affects other users. This can be improved by:
- not selecting more objects than actually needed in the report (which reduces network transfer)
- using aggregate tables via aggregate awareness (which reduces computation times on the server)
- using table-weighting (database-level optimisation of the query)
- optimising the memory and processing resources on the client machine (this applies to both stages)
- Lack of performance at the report level (e.g. long computation times when simply opening a document, selecting a report, or modifying anything in it). This can be caused by:
 - nested variables. E.g. <Z>=f(<Y>), where <Y>=g(<X>), will be slower than <Z>=f(g(<X>)). (Note: if you use variables to perform complex calculations on a regular basis, it may be worth building an external function DLL.)
 - use of numerous queries: don't use more synchronised queries than you need, as synchronisation is a resource-intensive process
 - use of rankings: ranking is a powerful feature, but it also consumes a lot of resources. Using one is fine, but using several in the same block can lead to long response times
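The nested-variables point above can be made concrete: materialising an intermediate variable <Y> costs an extra pass over the data compared with composing the functions directly. A minimal sketch with hypothetical functions `f` and `g` standing in for report formulas (the data and formulas are invented for illustration):

```python
# Hypothetical report data: one value per row.
data = [10.0, 20.0, 30.0]

def g(x):          # plays the role of <Y> = g(<X>)
    return x * 1.1

def f(y):          # plays the role of <Z> = f(<Y>)
    return round(y, 2)

# Nested-variable style: materialise the intermediate <Y> first,
# which costs an extra pass over the rows and extra storage.
Y = [g(x) for x in data]
Z_nested = [f(y) for y in Y]

# Composed style: <Z> = f(g(<X>)) computed in a single pass.
Z_composed = [f(g(x)) for x in data]

print(Z_nested == Z_composed)  # True: same result, fewer passes
```

Both forms produce identical values; the composed form simply avoids the intermediate materialisation, which is what makes it faster in a report with many rows.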
Finally, you can balance your queries and variables to improve whichever stage you want. For example, if you need dates available in both date and character-string format for reporting purposes, you can either define a variable that performs the conversion within the document, or create a User-Defined Object (i.e. at the query level) so that the string is available directly from the microcube. The former ensures the server doesn't get overloaded, whereas the latter ensures short computation times within the report.
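The trade-off above can be sketched in miniature: convert the date inside the "document" on demand, or pre-compute the string as the rows arrive from the "query". The row structure, field names, and format string here are illustrative assumptions, not BO objects:

```python
from datetime import date

# Hypothetical rows as they would arrive from the query.
rows = [{"order_date": date(2024, 1, 15)},
        {"order_date": date(2024, 3, 2)}]

# Option 1: document-level variable - convert inside the report on
# demand, keeping the query (and the server) light.
def date_as_string(row):
    return row["order_date"].strftime("%Y-%m-%d")

# Option 2: query-level User-Defined Object - the conversion is done
# once as rows arrive (in real BO, pushed into the SQL), so the string
# sits pre-computed in the microcube and the report does no extra work.
precomputed = [{"order_date": r["order_date"],
                "order_date_str": r["order_date"].strftime("%Y-%m-%d")}
               for r in rows]

print(date_as_string(rows[0]))           # 2024-01-15
print(precomputed[1]["order_date_str"])  # 2024-03-02
```

Option 1 re-runs the conversion every time the report recomputes; option 2 pays the cost once up front and slightly enlarges the microcube — the same balance described above.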