"how numbers are stored in computers"
Different rounding strategies affect the results of basic math operations like addition, subtraction, multiplication, and division. There are two key ways to measure rounding error: ulps (units in the last place) and relative error.
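For a concrete sense of what an ulp is, Python 3.9+ exposes math.ulp, which returns the spacing from a float to the next representable value (a small illustration, not from the original article):

```python
import math

# An ulp is the gap between a float and the next representable float.
print(math.ulp(1.0))   # 2.220446049250313e-16 (2**-52, the gap just above 1.0)
print(math.ulp(1e16))  # 2.0 -- doubles near 1e16 are spaced 2 apart
```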
Relative error measures the size of the error in proportion to the true value. It provides a scale-invariant way to describe error (i.e. meaningful across different magnitudes), which is especially important in floating-point systems where numbers can span many orders of magnitude.
Given an exact value $x$ and a computed approximation $\hat{x}$, the relative error is defined as:

$$\text{relative error} = \frac{|\hat{x} - x|}{|x|}, \qquad x \neq 0$$
Relative error is dimensionless and generally expressed as a decimal or percentage.
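As a minimal sketch (the helper name relative_error is ours, purely for illustration), the definition translates directly into Python:

```python
def relative_error(x_true, x_approx):
    """Relative error |x_approx - x_true| / |x_true|; undefined when x_true == 0."""
    return abs(x_approx - x_true) / abs(x_true)

# 0.1 + 0.2 is not exactly 0.3 in binary floating point:
print(relative_error(0.3, 0.1 + 0.2))  # ~1.85e-16, within a few rounding units
```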
Most floating-point operations and representations round their result to the nearest representable number. This introduces a small relative error:

$$\text{fl}(x) = x(1 + \delta), \qquad |\delta| \leq u$$

where $\text{fl}(x)$ is the rounded (stored or computed) value and $u$ is the unit roundoff. In IEEE 754 double precision, $u = 2^{-53} \approx 1.1 \times 10^{-16}$, so correctly rounded results are accurate to roughly 16 significant decimal digits.
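This bound can be checked directly with the standard library (a sketch assuming CPython's sys and decimal modules; Decimal(0.1) reveals the exact binary value stored for the literal 0.1):

```python
import sys
from decimal import Decimal

eps = sys.float_info.epsilon  # 2**-52: gap between 1.0 and the next double
u = eps / 2                   # unit roundoff, 2**-53 ~= 1.11e-16

# 0.1 has no exact binary representation; the stored double differs
# from the true value 1/10 by less than u in relative terms.
stored = Decimal(0.1)  # exact value of the double nearest to 0.1
rel_err = abs(stored - Decimal("0.1")) / Decimal("0.1")
print(float(rel_err), float(rel_err) <= u)  # ~5.55e-17, True
```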
When subtracting nearly equal numbers, significant digits can be lost, inflating relative error:

If $x \approx y$, the leading digits of $x$ and $y$ cancel in $x - y$, leaving a result dominated by whatever rounding error $x$ and $y$ already carried. The absolute error remains small, but measured against the now-tiny difference, the relative error can grow by many orders of magnitude.
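A classic demonstration is evaluating $1 - \cos(x)$ for small $x$; the rewrite as $2\sin^2(x/2)$ is a standard trigonometric identity used to dodge the subtraction (an illustrative sketch, not code from the article):

```python
import math

x = 1e-8
# Naive form: cos(x) rounds to exactly 1.0 here, so the subtraction
# cancels every significant digit and returns 0.0.
naive = 1.0 - math.cos(x)

# Algebraically equivalent form with no cancellation:
stable = 2.0 * math.sin(x / 2) ** 2

print(naive)   # 0.0 -- all information lost
print(stable)  # 5e-17 -- correct to full precision (true value is x**2/2)
```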
Operations like multiplication, division, and function evaluation (e.g., $\sqrt{x}$, $e^x$, $\log x$) also introduce rounding error. IEEE 754 requires the basic arithmetic operations and square root to be correctly rounded, so each contributes a relative error of at most $u$; over long chains of computation, however, these per-operation errors can compound.
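A rough check of the per-operation bound (a sketch; the factor 3 is a loose first-order allowance for the two roundings involved):

```python
import math

u = 2.0 ** -53  # unit roundoff for IEEE 754 double precision

# sqrt is correctly rounded (relative error <= u); squaring the result
# adds further rounding, so to first order the relative error of
# sqrt(2)**2 versus 2 should stay within about 3*u.
r = math.sqrt(2.0) ** 2
rel = abs(r - 2.0) / 2.0
print(rel, rel <= 3 * u)  # ~2.2e-16, True
```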
Floating-point representations maintain relative precision, which means that large numbers have large absolute spacing between representable values, and small numbers have tightly packed representable values. This can cause accumulated relative error in summation of many large values, and potential loss of significance when subtracting close values (catastrophic cancellation).
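The accumulation effect can be observed with the standard library alone; math.fsum computes a correctly rounded sum, so it serves as a reference here (an illustrative sketch):

```python
import math
import random

random.seed(0)
values = [random.uniform(0.0, 1.0) for _ in range(1_000_000)]

naive = sum(values)            # one rounding error per addition, compounding
reference = math.fsum(values)  # correctly rounded sum of the whole list

# The naive sum drifts by a small but measurable relative error.
print(abs(naive - reference) / abs(reference))
```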
Relative error should often guide comparisons:
```python
def nearly_equal(a, b, rel_tol=1e-9):
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))
```
This approach handles floating-point comparisons more robustly than absolute difference alone.
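For comparison, Python's standard library offers math.isclose, which applies essentially the same relative test plus an absolute tolerance (abs_tol) that matters when comparing against zero:

```python
import math

print(0.1 + 0.2 == 0.3)                         # False: exact equality fails
print(math.isclose(0.1 + 0.2, 0.3))             # True: within rel_tol=1e-9

# A purely relative test can never match anything to 0.0:
print(math.isclose(1e-20, 0.0))                 # False
print(math.isclose(1e-20, 0.0, abs_tol=1e-12))  # True
```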
Relative error is a critical metric for understanding and managing the limitations of floating-point arithmetic. It provides a scale-aware way to analyze errors introduced by rounding, computation, and algorithmic instability.
By keeping an eye on relative error, developers and scientists can choose robust comparison tolerances, detect and avoid catastrophic cancellation, and reason about how rounding error accumulates across long computations.