Andrew Koenig <ark-mlist at att.net>:

> On the other hand, it is pragmatically more convenient when an
> implementation prints the values of floating-point literals with a small
> number of significant digits with the same number of significant digits with
> which they were entered.

But "significant digits" is a concept that exists only in the mind of
the user. How is the implementation to know how many of the digits are
significant, or how many digits the number was originally entered with?
And what about numbers that result from a calculation, and weren't
"entered" at all?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz      +--------------------------------------+
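A small Python sketch of the point being made: a float value carries no
record of how many digits it was typed with, so two literals with different
"significant digits" denote the same value, and a computed result was never
entered in any form at all. (The exact repr shown assumes CPython 3.1+,
where repr produces the shortest string that round-trips.)

```python
# 0.10 and 0.1000 are the same binary floating-point value; the number
# of digits the user typed is not part of the value.
a = 0.10
b = 0.1000
print(a == b)            # True: identical values, different spellings

# A computed value was never "entered", so there is no original digit
# count for the implementation to recover.
c = 0.1 + 0.2
print(repr(c))           # 0.30000000000000004 under shortest-round-trip repr
```

Modern CPython sidesteps the question by printing the shortest decimal
string that converts back to exactly the same binary value, rather than
trying to guess the user's intended precision.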