Jan 16, 2002
------------
- class taught by Prof. Cal Ribbens
- Relative error vs. absolute error
  - application: testing for floating point equality
- Introduction to representations
  - self study (sec 2.1)
  - conversion between bases
- Floating point representation
  - Motivation: handle extremes of scale and provide a relatively uniform
    distribution of exactly representable numbers ... all in a fixed number
    of bits
  - IEEE (32-bit) standard
    - fl(x) = mantissa * 2^(exponent), where
      - mantissa = 1.____________ (23 bits)
      - exponent = 8 bit "excess-127" representation
      - and one sign bit
  - Related terminology
    - machine epsilon: 2^-23 (smallest positive number x s.t. fl(1+x) > 1)
    - overflow
    - underflow
    - double precision

Jan 18, 2002
------------
- Roundoff error and loss of significance
  - occurs when a number cannot be exactly represented in the available
    number of bits
  - error in representation
  - Implication: for 32-bit representation, no number has more than 6 or 7
    significant digits
- Propagation of roundoff error
  - multiplication
  - addition
    - example of "cancellation error"
- Avoiding error propagation
  - see hints in Sec 2.3
  - classic example: quadratic formula

Jan 21, 2002
------------
- Naren returns
  - asks questions about what happened
- Review number representation
  - finite decimal can correspond to an infinite binary representation
  - but finite binary always corresponds to finite decimal (why?)
- Moral of the story
  - Even before you start to compute anything, there is error!
- What happens to numbers on the computer?
  - chopping (Procrustes analogy)
    - easy to study
  - rounding (an example from House apportionment in the US Congress)
    - more difficult to study
- Review IEEE 32-bit standard
  - relative error for rounding <= 2^-24 (eps/2)
  - relative error for chopping <= 2^-23 (eps)
  - eps is the smallest positive number s.t. fl(1 + eps) > 1
  - eps: unit roundoff error, machine precision, or just eps
- Multiplication and division are benign
  - not so addition with opposite signs
- In addition and subtraction
  - Obs 1: coefficients can get arbitrarily large
  - Obs 2: If x & y are the same sign, coefficients of relative error are
    positive and bounded by 1 (addition is benign)
  - Obs 3: If x & y are different signs, coefficients of relative error can
    get arbitrarily large (compared to x and y) => cancellation error
    (the Achilles heel of numerical methods)
- Moral of the story:
  - Avoid subtraction, or addition with different signs
- Revisit Quadratic Equation
- Review Problems
- Story of the Pentium Affair
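The definition of machine epsilon in the Jan 16 notes (the smallest positive
x such that fl(1 + x) > 1) can be checked numerically. A minimal Python
sketch follows; note that Python floats are IEEE double precision, so the
result is 2^-52 rather than the 2^-23 of the 32-bit single format. The
function name is illustrative, not from the notes.

```python
# Find machine epsilon by repeated halving: the last x for which
# fl(1.0 + x) is still distinguishable from 1.0.
def machine_epsilon():
    x = 1.0
    while 1.0 + x / 2.0 > 1.0:
        x /= 2.0
    return x

print(machine_epsilon())  # 2**-52 for IEEE double precision
```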
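The Jan 16 notes mention relative error as the basis for testing floating
point equality, and the Jan 21 notes observe that a finite decimal (like 0.1)
can have an infinite binary representation. Both points can be illustrated in
a short sketch; the helper name and tolerance values below are my own
choices, not from the notes.

```python
# Compare by relative error: |x - y| <= rel_tol * max(|x|, |y|),
# with an absolute floor for comparisons near zero.
def approx_equal(x, y, rel_tol=1e-9, abs_tol=0.0):
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

# 0.1 and 0.2 are infinite repeating fractions in binary, so their
# stored values carry representation error, and exact == fails.
print(0.1 + 0.2 == 0.3)             # False
print(approx_equal(0.1 + 0.2, 0.3)) # True
```

Python's standard library offers `math.isclose` with the same shape of test.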
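The quadratic formula is the classic cancellation example cited on Jan 18 and
revisited on Jan 21: when b^2 >> 4ac, one root computes -b +/- sqrt(...) as a
difference of nearly equal numbers. A common repair (a sketch under the
assumption of real, distinct roots with a != 0; not taken verbatim from the
course) computes the non-cancelling root first and recovers the other from
the product of roots, c/a = r1 * r2.

```python
import math

# Naive formula: (-b - d) suffers cancellation when b < 0 and b^2 >> 4ac.
def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4.0 * a * c)
    return ((-b + d) / (2.0 * a), (-b - d) / (2.0 * a))

# Stable variant: add d with the sign of b (no cancellation), then use
# r1 * r2 = c / a for the second root.
def roots_stable(a, b, c):
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return (q / a, c / q)

# x^2 - 1e8 x + 1 = 0 has roots near 1e8 and 1e-8; the naive small
# root loses most of its significant digits.
print(roots_naive(1.0, -1e8, 1.0))
print(roots_stable(1.0, -1e8, 1.0))
```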