
fmod quick?


Disruptive

Programmer
Mar 8, 2005
Is there a quicker alternative to fmod that someone knows about? Is it quicker, in people's experience, to use ifs instead?

fmod((xpos + Xbox), Xbox)

ALSO

I am toggling between 0 and 1 using the following code:

(currenttoggle + 1) % 2;

which works fine; however, I am looking for a speed-up.

Any ideas?

Cheers

 
Have you done any profiling to work out where the real performance hot-spots are?

You need to have a "big picture" view of what your code is actually doing in response to real-world data, not plucking out random one-liners which you suspect of being a problem.

> fmod((xpos + Xbox), Xbox)
> (currenttoggle + 1) % 2;
This is just micro-optimisation gone wild. Most good compilers nowadays are capable of generating pretty optimal code (mine, for example, replaces the obvious modulo with something cheaper). Whatever one-line tricks you can come up with, the compiler may already do them (and many more besides).
In all but the most extreme cases, you might gain a few percent, and wreck the readability of your code in the process.

The real performance wins (orders of magnitude) come from choosing better algorithms to start with. No amount of hand-tweaking a bubble sort, for example, will ever make it beat a quicksort on large data sets.

But if you want a 0,1 toggle, just do
Code:
currenttoggle ^= 1;

--
 
It might be micro-optimisation, but faced with limited scope for changing the code, it makes sense to try, particularly with frequently repeated operations, as a small time saving is multiplied.

How do you suggest placing timers into the code to time the execution of certain sections?
 
I usually call the clock() function before and after the section of code I'm timing, then subtract the start time from the end time.
 
Which OS and compiler do you have?
Some compilers come with profilers and such-like (gcc for example).

> but faced with limited scope for changing the code
Your most cost-effective solution might be to just buy the fastest machine you can lay your hands on at the moment. If you can double the CPU speed for basically the cost of a week's labour, then it might be worth going for.

--
 
How often are those lines of code actually called each second?
 
(1) If you can't change the code, you can't make it faster.

(2) Yes, you're right that small savings in large loops can be worthwhile, but you absolutely must develop a way to time your code, even if it's as crude as pressing the button and using a stopwatch (if you've got a numerical processing task that takes 30 secs, a stopwatch is actually quite adequate!). It is very easy to waste a lot of time fiddling about with minor changes in a loop, without realising that the vast majority of processing time happens to be spent reading the hard disk (or some such activity) and the entire loop execution time is utterly irrelevant. If you make a 10% improvement on a 1% component of the system, overall it will take 29.97 secs instead of 30 secs. Will your end-user notice?

If you don't have a profiler, try missing out great chunks of program and see what happens to speed. A lot of routine number-crunching stuff is a series of operations that don't really care what numbers they're fed, so you can feed the 'wrong' results and still get an idea which steps are taking the time.

(3) Don't ever trust anyone's opinion on what "ought" to be faster. So much depends on context that it's really hard to predict how a system will perform. As a simple example, bubble-sort will perform as well as quick-sort if you feed it data that's already nearly in the right order. Some sorting algorithms are good for random data, but have poor "worst case" scenarios, which might be scenarios that can realistically appear (e.g. nearly exactly reversed order).

 
1. Yes, of course I can change the code, but there are limits to what I can do. Integrity of the data remains foremost.

This is a well-used function in the program, which is why I am undertaking the speed analysis. The code is for a numerical simulation, and hence at every move the mod has the potential to be executed. Of course there are other techniques, such as shared memory, which I am currently using; however, I would like to clean up the base code where possible.

3. There are often multiple ways to perform an operation, and some people are familiar with enhancement techniques through experience. Of course f.p. arithmetic is slower than integer, and we can be sure of that no matter what machine we run on.
 
In addition to my other (as yet unanswered) questions.

1. How long does a typical program run take at the moment (5 hours?)

2. How much time are you hoping to save (10 minutes?)

3. How often do you run the program (once a day?)

> Of course f.p arithmetic is slower than integer and we can be sure of that no matter what machine we run on.
Most modern processors are a hell of a lot better at FP than older processors.

--
 
Typical length of time is 1 week. It's a complex simulation, so you can imagine the time saving that removing a slow expression could bring. fmod might be called 1 billion times, so if I can avoid it and replace it with something quicker, that would be an advantage.

As for processing power, we have the best that we can get. That's why I am looking to optimise the code.
 
This code illustrates how to use a couple of fast clocks available on Linux running on pentium processors.
Code:
#include <stdio.h>
#include <math.h>
#include <unistd.h>
#include <sys/time.h>

/* The timestamp counter is a 64-bit register on pentium */
/* processors which increments at the clock rate of the CPU */
/* This is the inline asm instruction to read that counter. */
#define RDTSC(llptr) { \
    __asm__ __volatile__ ( \
    "rdtsc" \
  : "=A" (llptr) ); \
}

unsigned long long readTSC ( void ) {
  unsigned long long result;
  RDTSC(result);
  return result;
}

/* Time of day has a microsecond counter */
unsigned long long readTOD ( void ) {
  unsigned long long result;
  struct timeval now;
  gettimeofday(&now,NULL);
  return (unsigned long long)now.tv_sec * 1000000 + now.tv_usec;
}

/* Back-to-back calls, to check the overhead of doing nothing */
void cal1 ( void ) {
  unsigned long long r1, r2, r3, r4;
  r1 = readTSC();
  r2 = readTSC();
  r3 = readTOD();
  r4 = readTOD();
  printf( "TSC delta ticks=%llu, TOD delta uSec=%llu\n",
          r2 - r1, r4 - r3 );
}

/* The clock values for about 1 elapsed second */
void cal2 ( void ) {
  unsigned long long r1, r2, r3, r4;
  r1 = readTSC();
  sleep(1);
  r2 = readTSC();
  r3 = readTOD();
  sleep(1);
  r4 = readTOD();
  printf( "TSC ticks/second=%llu, TOD uSec/second=%llu\n",
          r2 - r1, r4 - r3 );
}

/* Now measure fmod itself */
void fmod_test ( void ) {
  double result;
  unsigned long long r1, r2;
  r1 = readTSC();
  result = fmod( 1234.56, 78.9 );
  r2 = readTSC();
  printf( "Result=%f, ticks=%llu\n", result, r2 - r1 );
}

int main ( void ) {
  int i;
  for ( i = 0 ; i < 5 ; i++ ) {
    cal1();
  }
  for ( i = 0 ; i < 5 ; i++ ) {
    cal2();
  }
  for ( i = 0 ; i < 5 ; i++ ) {
    fmod_test();
  }
  return 0;
}

My results
[tt]$ gcc foo.c -lm ; ./a.out
TSC delta ticks=92, TOD delta uSec=5
TSC delta ticks=104, TOD delta uSec=5
TSC delta ticks=108, TOD delta uSec=4
TSC delta ticks=108, TOD delta uSec=4
TSC delta ticks=108, TOD delta uSec=4
TSC ticks/second=1802956732, TOD uSec/second=1001845
TSC ticks/second=1803017612, TOD uSec/second=1001847
TSC ticks/second=1803017620, TOD uSec/second=1001846
TSC ticks/second=1803022320, TOD uSec/second=1001846
TSC ticks/second=1803327624, TOD uSec/second=1001846
Result=51.060000, ticks=68420
Result=51.060000, ticks=3480
Result=51.060000, ticks=2996
Result=51.060000, ticks=3004
Result=51.060000, ticks=2996
[/tt]
My processor is detected as "Intel(R) Pentium(R) 4 CPU 1.80GHz stepping 02", which is why I get about 1.8 billion ticks per second.

So on my machine, a single fmod call takes about 2 microseconds. Over a billion calls, this equates to about 30 minutes of machine time (assuming you could eliminate it completely). You're going to have to come up with something a lot more significant than a few minutes a week.

Profiling
[tt]
$ gcc -pg foo.c -lm ; ./a.out ; gprof a.out > tmp.txt
Flat profile:

Each sample counts as 0.01 seconds.
no time accumulated

  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 0.00      0.00     0.00       30     0.00     0.00   readTSC
 0.00      0.00     0.00       20     0.00     0.00   readTOD
 0.00      0.00     0.00        5     0.00     0.00   cal1
 0.00      0.00     0.00        5     0.00     0.00   cal2
 0.00      0.00     0.00        5     0.00     0.00   fmod_test

 %         the percentage of the total running time of the
time       program used by this function.

cumulative a running sum of the number of seconds accounted
 seconds   for by this function and those listed above it.

 self      the number of seconds accounted for by this
seconds    function alone.  This is the major sort for this
           listing.

calls      the number of times this function was invoked, if
           this function is profiled, else blank.

 self      the average number of milliseconds spent in this
ms/call    function per call, if this function is profiled,
           else blank.

 total     the average number of milliseconds spent in this
ms/call    function and its descendents per call, if this
           function is profiled, else blank.

name       the name of the function.  This is the minor sort
           for this listing.  The index shows the location of
           the function in the gprof listing.  If the index is
           in parenthesis it shows where it would appear in
           the gprof listing if it were to be printed.
Call graph (explanation follows)


granularity: each sample hit covers 2 byte(s) no time propagated

index  % time    self  children    called     name
                 0.00    0.00      10/30        cal1 [3]
                 0.00    0.00      10/30        cal2 [4]
                 0.00    0.00      10/30        fmod_test [5]
[1]       0.0    0.00    0.00      30         readTSC [1]
-----------------------------------------------
                 0.00    0.00      10/20        cal1 [3]
                 0.00    0.00      10/20        cal2 [4]
[2]       0.0    0.00    0.00      20         readTOD [2]
-----------------------------------------------
                 0.00    0.00       5/5         main [12]
[3]       0.0    0.00    0.00       5         cal1 [3]
                 0.00    0.00      10/30        readTSC [1]
                 0.00    0.00      10/20        readTOD [2]
-----------------------------------------------
                 0.00    0.00       5/5         main [12]
[4]       0.0    0.00    0.00       5         cal2 [4]
                 0.00    0.00      10/30        readTSC [1]
                 0.00    0.00      10/20        readTOD [2]
-----------------------------------------------
                 0.00    0.00       5/5         main [12]
[5]       0.0    0.00    0.00       5         fmod_test [5]
                 0.00    0.00      10/30        readTSC [1]
-----------------------------------------------

This table describes the call tree of the program, and was sorted by
the total amount of time spent in each function and its children.

Each entry in this table consists of several lines. The line with the
index number at the left hand margin lists the current function.
The lines above it list the functions that called this function,
and the lines below it list the functions this one called.
This line lists:
     index     A unique number given to each element of the table.
               Index numbers are sorted numerically.
               The index number is printed next to every function name
               so it is easier to look up where the function is in the
               table.

     % time    This is the percentage of the `total' time that was spent
               in this function and its children.  Note that due to
               different viewpoints, functions excluded by options, etc,
               these numbers will NOT add up to 100%.

     self      This is the total amount of time spent in this function.

     children  This is the total amount of time propagated into this
               function by its children.

     called    This is the number of times the function was called.
               If the function called itself recursively, the number
               only includes non-recursive calls, and is followed by
               a `+' and the number of recursive calls.

     name      The name of the current function.  The index number is
               printed after it.  If the function is a member of a
               cycle, the cycle number is printed between the
               function's name and the index number.


For the function's parents, the fields have the following meanings:

     self      This is the amount of time that was propagated directly
               from the function into this parent.

     children  This is the amount of time that was propagated from
               the function's children into this parent.

     called    This is the number of times this parent called the
               function `/' the total number of times the function
               was called.  Recursive calls to the function are not
               included in the number after the `/'.

     name      This is the name of the parent.  The parent's index
               number is printed after it.  If the parent is a
               member of a cycle, the cycle number is printed between
               the name and the index number.

If the parents of the function cannot be determined, the word
`<spontaneous>' is printed in the `name' field, and all the other
fields are blank.

For the function's children, the fields have the following meanings:

     self      This is the amount of time that was propagated directly
               from the child into the function.

     children  This is the amount of time that was propagated from the
               child's children to the function.

     called    This is the number of times the function called
               this child `/' the total number of times the child
               was called.  Recursive calls by the child are not
               listed in the number after the `/'.

     name      This is the name of the child.  The child's index
               number is printed after it.  If the child is a
               member of a cycle, the cycle number is printed
               between the name and the index number.

If there are any cycles (circles) in the call graph, there is an
entry for the cycle-as-a-whole. This entry shows who called the
cycle (as parents) and the members of the cycle (as children.)
The `+' recursive calls entry shows the number of function calls that
were internal to the cycle, and the calls entry for each member shows,
for that member, how many times it was called from other members of
the cycle.


Index by function name

[3] cal1 [5] fmod_test [1] readTSC
[4] cal2 [2] readTOD

[/tt]
I would suggest lots of experiments with simple code you can readily understand; the output from gprof is kinda hard to understand at first.
Very little time is actually used (sleeping doesn't count), which is why all the %time and seconds columns are zero. But the number of calls each function makes should be pretty self-explanatory.

If you use the profiler on your code, with a representative data sample which takes say an hour to calculate, then you should see lots of useful information about which functions get called most often, and which routines take the most time. These are things you should be focussing on.

--
 