
Debugging and Tuning

How do I use Coverage to determine the slow parts of my application?

by foxdev
The Problem: a portion of your application is running slowly. You've reviewed the code but cannot see anything obviously wrong, and you want to identify the cause of the poor performance.

The Solution (part 1): you've identified the cmdSave.Click method on one of your forms as one of the processes that is too slow. This method contains hundreds of lines of code and calls other methods and user-defined functions.

You add the statement:
[tt]set coverage to save.log[/tt]
at the top of cmdSave.Click, and
[tt]set coverage to[/tt]
just before each exit point (every [tt]return[/tt] statement, and the end of the method if it can fall through without one).
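For illustration, here is a minimal sketch of the instrumented method; the save logic and the [tt]lSomethingWentWrong[/tt] flag are hypothetical placeholders, not part of the original code:

[tt]* cmdSave.Click
set coverage to save.log      && start writing the coverage log
* ... hundreds of lines of save logic, calls to other methods/UDFs ...
if m.lSomethingWentWrong      && hypothetical early exit
   set coverage to            && stop logging before bailing out
   return .f.
endif
* ... more save logic ...
set coverage to               && stop logging at the normal exit
return .t.[/tt]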

You run the program as a user would, and click the Save button at the appropriate time. You then exit the application and find that you have a file called [tt]save.log[/tt] waiting for you.

You open the file and see many, many lines of numbers and program/method names. Analyzing the thousands of lines in the file by eye is impossible, so you crank up the VFP Coverage Profiler from the Tools menu.
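If you prefer the Command window to the Tools menu, VFP 6 and later can also launch the Profiler programmatically, passing the log file name so it opens your log right away (this assumes the [tt]_COVERAGE[/tt] system variable still points to the shipped Coverage.app, as it does by default):

[tt]do (_coverage) with "save.log"[/tt]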

The Coverage Profiler takes a very long time to process your log file, and when it is done you are presented with a list of procedures and functions that were referenced during program execution.

You click the Profile View button, then wait a few minutes while things are processed again. You then see a hit counter and execution time tally next to each line of the source code. You click on a different procedure and wait another few minutes while that method's code is marked.

Yuck. This is taking a long time, and it forces you to comb through every procedure and method hunting for long execution times.

The Solution (part 2): Forget about the Coverage Profiler. Perform the log creation steps ([tt]set coverage to ...[/tt]), then do the following:

1) Create a table using this:
[tt]create table profiler free ;
(elapsed N(11,3), dummy c(1), procname c(30), ;
linenum N(5,0), filename c(100))[/tt]

(adjust the length of the procname and filename columns as appropriate for your environment; also, depending on your VFP version, the coverage log may include a sixth field for the call stack level, in which case add a trailing column, such as [tt]stacklevel N(3,0)[/tt], to catch it)

2) Import the log file using this:
[tt]append from save.log delimited[/tt]

3) Browse the profiler table to confirm that everything was imported correctly (not a necessary step, but a useful sanity check)

4) Execute this command to easily see which lines are taking the longest to execute:
[tt]select top 40 *, sum(elapsed) as total, count(*) as num, ;
procname+str(linenum) as key ;
from profiler group by key ;
order by total desc[/tt]

(If you are on VFP 8 or later and the default SQL engine rejects this grouping style, issue [tt]set enginebehavior 70[/tt] first.)

There! The lines of code with the longest aggregate execution time appear at the top, descending from there. By aggregating per line for our analysis, we account for individually quick lines that sit inside loops, giving us a better picture of how much each line contributes to the molasses factor.

Together, the append and select operations take much less time than the Coverage Profiler (I can process a 200,000-line log file in about a minute).

You can modify the above [tt]select[/tt] statement to provide different information, such as showing only the lines that took longer than one second in total to execute.
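For example, here is a sketch of one such variant: it drops the [tt]top 40[/tt] clause and adds a [tt]having[/tt] filter so that only aggregated lines with more than one second of total execution time are returned (the 1.000 threshold is arbitrary; adjust to taste):

[tt]select *, sum(elapsed) as total, count(*) as num, ;
procname+str(linenum) as key ;
from profiler group by key ;
having sum(elapsed) > 1.000 ;
order by total desc[/tt]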