"how numbers are stored and used in computers"
The hidden bit trick is a clever optimization used in floating point representation to gain one extra bit of precision without requiring any additional storage. This technique is central to the IEEE 754 floating point formats, which are widely used in computer systems for representing real numbers.
In floating point representation, a number is typically expressed in the form (-1)^sign × mantissa × 2^exponent, where the sign bit records whether the number is negative, the mantissa (also called the significand) holds the significant digits, and the exponent scales the value by a power of two.
The hidden bit trick comes into play with the mantissa. In normalized floating point numbers, the leading digit of the mantissa is always 1. Instead of storing this leading 1 explicitly, it is "hidden" and assumed to be present. This allows the storage of an additional bit of precision in the mantissa without increasing the size of the representation.
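To make the stored fields concrete, here is a minimal sketch that splits an IEEE 754 single-precision float into its sign, exponent, and fraction fields and reconstructs the significand with the hidden leading 1. The function name `decode_float32` is illustrative; the bit layout (1 sign bit, 8 exponent bits, 23 fraction bits, bias 127) is the standard binary32 format.

```python
import struct

def decode_float32(x):
    """Split an IEEE 754 single-precision float into its stored fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF       # 8-bit biased exponent (bias 127)
    fraction = bits & 0x7FFFFF           # 23 explicitly stored mantissa bits
    # For normalized numbers the leading 1 is implicit ("hidden"),
    # so the effective significand is 1.fraction:
    significand = 1 + fraction / 2**23
    return sign, exponent, significand

# 6.5 = 110.1 in binary = 1.101 * 2^2, so the biased exponent is 2 + 127 = 129
sign, exponent, significand = decode_float32(6.5)
value = (-1)**sign * significand * 2**(exponent - 127)
```

Note that only the `.101` part of the significand `1.101` occupies storage; the leading 1 is reattached during decoding.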
Consider a floating point format with a 4-bit exponent and a 3-bit mantissa. Normally, you might store the mantissa bits 110, read as 0.110 in binary, giving only three bits of precision.
With the hidden bit trick, the mantissa is interpreted as 1.110 (in binary), where the leading 1 is implicit. This effectively gives you a 4-bit mantissa precision without needing to store the extra bit.
The primary benefit of the hidden bit trick is increased precision. By assuming the leading 1 in normalized numbers, floating point formats can represent numbers more accurately within the same bit width. This is crucial in scientific computations and graphics processing, where precision is paramount.
The hidden bit trick only applies to normalized numbers. Subnormal numbers, which are used to represent values very close to zero, do not have a leading 1 in their mantissa and thus do not benefit from this trick. In these cases, signaled by an all-zero exponent field, the implicit leading bit is taken to be 0 rather than 1.
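A sketch of a full binary32 decoder shows how the two cases diverge: an all-zero exponent field switches the implicit bit from 1 to 0 and pins the exponent at its minimum. The function name `decode_float32_full` is illustrative; the special exponent value 255 (infinities and NaNs) is deliberately not handled.

```python
import struct

def decode_float32_full(x):
    """Decode an IEEE 754 float32, handling normals and subnormals.
    Exponent field 255 (Inf/NaN) is not handled in this sketch."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = (-1) ** (bits >> 31)
    exponent = (bits >> 23) & 0xFF
    fraction = (bits & 0x7FFFFF) / 2**23
    if exponent == 0:
        # Subnormal: no hidden 1; significand is 0.fraction,
        # and the exponent is fixed at 1 - bias = -126.
        return sign * fraction * 2.0**-126
    # Normalized: the hidden bit makes the significand 1.fraction.
    return sign * (1 + fraction) * 2.0**(exponent - 127)

smallest = 2.0**-149           # smallest positive subnormal float32
decode_float32_full(smallest)  # round-trips through the subnormal branch
```

Because subnormals drop the hidden 1, they lose precision gradually as values approach zero, which is exactly the trade-off that lets them fill the gap between the smallest normalized number and zero.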