
Cancellation (``Loss of Significance'') Errors

When subtracting two numbers that are nearly equal, the most significant digits of the operands match and cancel each other. This is no problem if the operands are exact, but in real life the operands carry errors from earlier roundings; the cancellation removes the accurate leading digits and leaves a result dominated by those errors. In such cases the cancellation may prove catastrophic.
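As a small stand-alone illustration (not part of the original example; the program and variable names are made up), the snippet below subtracts two single precision numbers that agree in their first seven digits. Each operand is stored with a relative error of order $10^{-7}$ at most, but after the cancellation that error becomes a noticeable fraction of the tiny difference.

program cancel_demo
  implicit none
  real             :: x, y, diff
  double precision :: ref
  x = 1.000001              ! stored with a relative representation error of order 1.0E-7
  y = 1.000000              ! exactly representable
  diff = x - y              ! the matching leading digits cancel; only the "noisy" digits remain
  ref  = 1.000001d0 - 1.0d0 ! double precision reference value, about 1.0E-6
  ! diff comes out near 9.54E-7 instead of 1.0E-6, a relative error of a few percent
  print *, 'single precision difference:', diff, '  reference:', ref
end program cancel_demo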

For example, suppose we want to solve the quadratic equation

\[
a x^2 + b x + c = 0 ,
\]

where all the coefficients $a$, $b$, $c$ are FP numbers, using our toy decimal FP system and the quadratic formula

\[
r_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} .
\]

When $b^2$ is much larger than $|4ac|$, the computed value of $\sqrt{b^2 - 4ac}$ agrees with $|b|$ in almost all of its digits, and one of the two numerators $-b \pm \sqrt{b^2 - 4ac}$ is the difference of two nearly equal numbers. It is here where the cancellation occurs! The root computed with the addition in the numerator is accurate, but the root computed with the subtraction may retain only a digit or two of accuracy: if the relative error in the first root is acceptable, the relative error in the second can be larger by many orders of magnitude.

The same happens in single precision:

real :: a=1.0, b=-1.0E+8, c=9.999999E+7
real :: d, r1, r2, e2
d  = sqrt(b**2 - 4.0*a*c)
r1 = (-b + d)/(2.0*a)
r2 = (-b - d)/(2.0*a)
e2 = (2.0*c)/(-b + d)

The exact results are $r_1 \approx 9.9999999 \times 10^{7}$ and $r_2 \approx 0.9999999$, and we expect the numerical results to be close approximations. In single precision $b^2 = 10^{16}$ while $4ac \approx 4 \times 10^{8}$, which is less than half a unit in the last place of $b^2$; the subtraction $b^2 - 4ac$ therefore returns $b^2$ unchanged, and d is computed as $1.0 \times 10^{8}$ instead of $\approx 9.9999998 \times 10^{7}$. Then r1 $= 1.0 \times 10^{8}$, a good approximation of the large root, but in r2 the numerator $-b - d$ cancels completely and the computed r2 is $0.0$, a $100\%$ relative error.

Consider now the coefficients

real :: a=1.0E-3, b=-9999.999, c=-1.0E+4

The exact results are $r_1 = 1.0 \times 10^{7}$ and $r_2 = -1$. The discriminant and d are computed accurately, but the numerator $-b - d \approx 9999.999 - 10000.001$ is the difference of two nearly equal numbers and suffers from cancellation errors. The numerical root r1 $= 1.0 \times 10^{7}$ is exact(!), while the computed r2 has a relative error of about 2.5%, much higher than the expected $\approx 10^{-7}$ (the single precision rounding unit).
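For completeness, here is a minimal single precision program for this case (a sketch; the program name and the print statement are mine). It evaluates both forms of the small root so the loss of accuracy can be observed directly:

program quad_cancel
  implicit none
  real :: a = 1.0E-3, b = -9999.999, c = -1.0E+4
  real :: d, r1, r2, e2
  d  = sqrt(b**2 - 4.0*a*c)   ! the discriminant itself is computed accurately here
  r1 = (-b + d)/(2.0*a)       ! large root: no cancellation
  r2 = (-b - d)/(2.0*a)       ! small root: -b - d cancels, losing accuracy
  e2 = (2.0*c)/(-b + d)       ! equivalent formula for the small root (see below)
  print *, 'r1 =', r1, '  r2 =', r2, '  e2 =', e2
end program quad_cancel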

To overcome this, we might avoid the cancellation by using a mathematically equivalent formula for the affected root. Multiplying the numerator and the denominator by $-b + \sqrt{b^2 - 4ac}$ gives

\[
r_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a} = \frac{2c}{-b + \sqrt{b^2 - 4ac}} ,
\]

which is exactly the quantity e2 computed in the code above. For $b < 0$ the denominator is a sum of positive numbers, so the cancellation disappears (for $b > 0$ one rationalizes the other root instead, so that quantities of the same sign are always added). With this formula the small root of the second example comes out as e2 $= 0.9999999$, a much better approximation, and for the third example the computed root is $-1.0$ (exact).
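A common way to package this fix (a sketch, not taken from the notes; the subroutine name and interface are illustrative) is to always form the numerator by adding quantities of the same sign, using the intrinsic sign, and then to recover the second root from the product relation $r_1 r_2 = c/a$:

! Sketch of a cancellation-avoiding quadratic solver (illustrative interface).
! Assumes a /= 0 and b**2 >= 4*a*c; degenerate cases are not handled.
subroutine solve_quadratic(a, b, c, r1, r2)
  implicit none
  real, intent(in)  :: a, b, c
  real, intent(out) :: r1, r2
  real :: d, q
  d = sqrt(b**2 - 4.0*a*c)
  q = -0.5*(b + sign(d, b))   ! b and sign(d,b) have the same sign: no cancellation
  r1 = q/a                    ! root of larger magnitude
  r2 = c/q                    ! other root, from r1*r2 = c/a
end subroutine solve_quadratic

Called with the coefficients of the two single precision examples above, this returns accurate approximations to both roots without any post-hoc fix.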


Adrian Sandu 2001-08-26