The Problem: a portion of your application is running slow. You've reviewed the code, but cannot see anything obviously wrong. You want to identify the cause of the poor performance.
The Solution (part 1): you've identified the cmdSave.Click method on one of your forms as one of the slow spots. This method contains hundreds of lines of code and calls other methods and user-defined functions.
You add the statement:
[tt]set coverage to save.log[/tt]
at the top of cmdSave.Click, and
[tt]set coverage to[/tt]
just before any exit points (return statements).
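Put together, the instrumented method might look like this (the validation call and save logic below are hypothetical placeholders for your own code):

[tt]* cmdSave.Click
* start writing a coverage log (add the ADDITIVE keyword
* to append if Save can be clicked more than once per run)
set coverage to save.log

if not this.ValidateData()   && hypothetical early exit
	set coverage to          && stop logging before leaving
	return .f.
endif

* ... hundreds of lines of save logic ...

set coverage to
return .t.[/tt]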
You run the program as a user would, and click the Save button at the appropriate time. You then exit the application and find that you have a file called [tt]save.log[/tt] waiting for you.
You open the file, and see that there are many, many lines with numbers and program/method names. It is impossible for you to analyze the thousands of lines in the file, so you crank up the VFP Coverage Profiler from the Tools menu.
The Coverage Profiler takes a very long time to process your log file, and when it is done you are presented with a list of procedures and functions that were referenced during program execution.
You click the Profile View button, then wait a few minutes while things are processed again. You then see a hit counter and execution time tally next to each line of the source code. You click on a different procedure and wait another few minutes while that method's code is marked.
Yuck. This is taking a long time, and it is going to force you to look through each procedure and method looking for long execution times.
The Solution (part 2): Forget about the Coverage Profiler. Perform the log creation steps ([tt]set coverage to ...[/tt]) as above, then do the following:
1) Create a table to receive the log. The coverage log is comma-delimited, with the elapsed time, class, procedure, line number, file, and call stack level on each line, so something like this should match it (adjust the length of the procname and filename columns as appropriate for your environment):
[tt]create table profiler ( ;
	elapsed n(10,6), classname c(60), procname c(60), ;
	linenum i, filename c(120), stacklvl i)[/tt]
2) Import the log file using this:
[tt]append from save.log delimited[/tt]
3) Browse the profiler table to see that everything was imported correctly (not a necessary step, obviously)
4) Execute this command to easily see which lines are taking the longest to execute:
[tt]select top 40 procname, linenum, ;
	sum(elapsed) as total, count(*) as num ;
	from profiler ;
	group by procname, linenum ;
	order by total desc[/tt]
(In VFP's SQL, [tt]top[/tt] must come right after [tt]select[/tt], and grouping directly on procname and linenum avoids the [tt]select *[/tt]-with-[tt]group by[/tt] combination that newer engine behavior settings reject.)
There! The slowest lines of code, in aggregate, are shown at the top, descending from there. By aggregating the lines for our analysis, we account for individually quick lines that are contained in loops, thus giving us a better picture of how much each line contributes to the molasses factor.
Together, the append and select operations take much less time than the Coverage Profiler (I can process a 200,000-line log file in about a minute).
You can try modifying the above [tt]select[/tt] statement to provide different information, such as only lines that took longer than 1 second to execute.
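For example, a variant that restricts the output to the procedure/line combinations whose aggregate time exceeds one second might look like this (it assumes the same [tt]profiler[/tt] table layout used above):

[tt]select procname, linenum, ;
	sum(elapsed) as total, count(*) as num ;
	from profiler ;
	group by procname, linenum ;
	having sum(elapsed) > 1 ;
	order by total desc[/tt]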