[Andrew Koenig]
>> On the other hand, it is pragmatically more convenient when an
>> implementation prints the values of floating-point literals with a
>> small number of significant digits with the same number of
>> significant digits with which they were entered.

[Greg Ewing]
> But "significant digits" is a concept that exists only
> in the mind of the user. How is the implementation to
> know how many of the digits are significant, or how
> many digits it was originally entered with?
>
> And what about numbers that result from a calculation,
> and weren't "entered" at all?

The Decimal module has answers to such questions, following the proposed
IBM decimal standard, which in turn follows long-time REXX practice.
The representation is not normalized, and because of that is able to
keep track of "significant" trailing zeroes. So, e.g., decimal 2.7 - 1.7
yields decimal 1.0 (neither decimal 1. nor decimal 1.00), while decimal
2.75 - 1.65 yields decimal 1.10, and 1.0 and 1.10 have different internal
representations than decimal 1 and 1.1, or 1.00 and 1.100.

"The rules" are spelled out in detail in the spec:

    http://www2.hursley.ibm.com/decimal/
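
For concreteness, here is how that plays out in a quick session with
Python's decimal module (a minimal sketch; the reprs are spelled the way
a current Python 3 prints them, while older versions show Decimal("1.0")
and plain tuples instead):

    >>> from decimal import Decimal
    >>> Decimal("2.7") - Decimal("1.7")      # trailing zero is preserved
    Decimal('1.0')
    >>> Decimal("2.75") - Decimal("1.65")
    Decimal('1.10')
    >>> Decimal("1.0") == Decimal("1")       # numerically equal ...
    True
    >>> Decimal("1.0").as_tuple()            # ... but distinct representations
    DecimalTuple(sign=0, digits=(1, 0), exponent=-1)
    >>> Decimal("1").as_tuple()
    DecimalTuple(sign=0, digits=(1,), exponent=0)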