In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by:
$b^{E_{\text{min}}}$
where b is the base (radix) of the format (commonly 2 or 10, for binary and decimal number systems), and $E_{\text{min}}$ depends on the size and layout of the format.
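As a minimal illustration (assuming the IEEE 754 binary64 format that Python's float uses, with b = 2 and E_min = −1022), the sketch below evaluates $b^{E_{\text{min}}}$ directly and compares it with the smallest normal value the interpreter reports; the variable names are only illustrative:

    import sys

    # Smallest positive normal number: b ** E_min.
    # Assumes IEEE 754 binary64 (Python's float): b = 2, E_min = -1022.
    b, e_min = 2, -1022
    smallest_normal = float(b) ** e_min

    # sys.float_info.min is the smallest positive normal float.
    print(smallest_normal)                        # 2.2250738585072014e-308
    print(smallest_normal == sys.float_info.min)  # True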
Similarly, the magnitude of the largest normal number in a format is given by
$b^{E_{\text{max}}} \cdot \left(b - b^{1-p}\right)$
where p is the precision of the format in digits and $E_{\text{min}}$ is related to $E_{\text{max}}$ as:
$E_{\text{min}} \,{\overset{\Delta}{\equiv}}\, 1 - E_{\text{max}} = \left(-E_{\text{max}}\right) + 1$
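A similar sketch (same binary64 assumptions, with p = 53 and E_max = 1023) derives E_min from E_max and evaluates the largest normal magnitude from the formula above:

    import sys

    # IEEE 754 binary64 parameters (Python's float).
    b, p, e_max = 2, 53, 1023
    e_min = 1 - e_max                             # E_min = 1 - E_max = -1022

    # Largest normal magnitude: b**E_max * (b - b**(1 - p)).
    largest_normal = float(b) ** e_max * (b - float(b) ** (1 - p))

    print(largest_normal)                         # 1.7976931348623157e+308
    print(largest_normal == sys.float_info.max)   # True
    print(e_min == sys.float_info.min_exp - 1)    # True (min_exp is E_min + 1)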
In the IEEE 754 binary and decimal formats, b, p, $E_{\text{min}}$, and $E_{\text{max}}$ have the following values:[1]

    Format       b    p     E_min     E_max
    binary16     2    11    −14       15
    binary32     2    24    −126      127
    binary64     2    53    −1022     1023
    binary128    2    113   −16382    16383
    decimal32    10   7     −95       96
    decimal64    10   16    −383      384
    decimal128   10   34    −6143     6144
For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is $10^{-95}$ through $9.999999 \times 10^{96}$.
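To make the decimal32 example concrete, the sketch below plugs the table's parameters into both formulas, using Python's decimal module purely as an exact calculator (decimal32 is not itself a built-in Python type):

    from decimal import Decimal

    # decimal32 parameters from the table above.
    b, p, e_max = 10, 7, 96
    e_min = 1 - e_max                             # -95

    smallest_normal = Decimal(b) ** e_min
    largest_normal = Decimal(b) ** e_max * (Decimal(b) - Decimal(b) ** (1 - p))

    print(smallest_normal)                        # 1E-95
    print(largest_normal)                         # 9.999999E+96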
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).
Zero is considered neither normal nor subnormal.
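A final sketch (binary64 again; math.ulp requires Python 3.9 or later) shows the boundary: halving the smallest normal number gives a non-zero result that falls below the normal range, i.e. a subnormal, and the smallest positive subnormal binary64 value is $2^{-1074}$:

    import math
    import sys

    smallest_normal = sys.float_info.min          # 2**-1022, smallest positive normal
    subnormal = smallest_normal / 2               # 2**-1023, below the normal range

    print(subnormal != 0.0)                       # True: not flushed to zero
    print(0.0 < subnormal < smallest_normal)      # True: it is subnormal

    # Smallest positive subnormal binary64 value (math.ulp needs Python 3.9+).
    print(math.ulp(0.0))                          # 5e-324, i.e. 2**-1074
    print(math.ulp(0.0) == 2.0 ** -1074)          # True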