I'm wondering if anyone can give any advice on tracing a rounding error in a program.
I have two slightly different implementations of a process, which basically come down to the following.
In Version 1, various increments are added to a [tt]double[/tt] variable, and at the end I check whether it is greater than or equal to a threshold variable.
In Version 2, the increments are summed separately and the total is then added to the variable. The variable is then compared to the threshold as normal.
In some cases the values produced differ very slightly (on the order of 2.220446e-16). I've been trying to trace exactly why, but in the real code I haven't been able to pin down a clear-cut example of what makes the difference.
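For what it's worth, 2.220446e-16 is DBL_EPSILON (2^-52), the gap between adjacent doubles just above 1.0, so if the totals land between 1 and 2 the two results differ by exactly one representable step. To make the two patterns concrete, here is a minimal sketch with made-up increments (not my real data) where the two orderings already disagree:

[code]
#include <stdio.h>

int main(void)
{
    /* Made-up values, not my real data -- chosen because IEEE 754
       addition is not associative for them. */
    double start = 0.1;
    double inc1  = 0.2;
    double inc2  = 0.3;

    /* Version 1: add each increment to the variable as it arrives.
       Every += rounds, and here both roundings happen to go upward. */
    double v1 = start;
    v1 += inc1;              /* 0.1 + 0.2 -> 0.30000000000000004 */
    v1 += inc2;              /* ... + 0.3 -> 0.6000000000000001  */

    /* Version 2: sum the increments first, then add the total once.
       0.2 + 0.3 happens to be exactly 0.5 in binary, so the only
       rounding is in the final addition, which goes downward. */
    double total = inc1 + inc2;   /* exactly 0.5 */
    double v2 = start + total;    /* 0.1 + 0.5 -> 0.59999999999999998 */

    printf("v1   = %.17g\n", v1);        /* 0.60000000000000009 */
    printf("v2   = %.17g\n", v2);        /* 0.59999999999999998 */
    printf("diff = %.17g\n", v1 - v2);   /* 1.1102230246251565e-16 */
    return 0;
}
[/code]

With these values, if the threshold happened to fall between the two results, the greater-than-or-equal check would pass in Version 1 and fail in Version 2.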
Any suggestions/comments?
Thanks in advance!