[Shane Hathaway]
> It seems like most people who write '1.1' don't really want to dive
> into serious floating-point work. I wonder if literals like 1.1
> should generate decimal objects as described by PEP 327, rather than
> floats. Operations on decimals that have no finite decimal
> representation (like 1.1 / 3) should use floating-point arithmetic
> internally, but they should store in decimal rather than floating-point
> format.

Well, one of the points of the Decimal module is that it gives results that "look like" people get from pencil-and-paper math (or hand calculators). So, e.g., I think the newbie traumatized by not getting back "0.1" after typing 0.1 would get just as traumatized if moving to binary fp internally caused 1.1 / 3.3 to look like

    0.33333333333333337

instead of

    0.33333333333333333

If they stick to Decimal throughout, they will get the latter result (and they'll continue to get a string of 3's for as many digits as they care to ask for).

Decimal doesn't suffer string<->float conversion errors, but beyond that it's prone to all the same other sources of error as binary fp. Decimal's saving grace is that the user can boost working precision to well beyond the digits they care about in the end. Kahan always wrote that the best feature of IEEE-754 for easing the lives of the fp-naive is the "double extended" format, and HW support for that is built into all Pentium chips. Alas, most compilers and languages give no access to it. The only thing Decimal will have against it in the end is runtime sloth.
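For concreteness, a small interactive sketch of the behavior described above (the float repr assumes a platform with IEEE-754 doubles; the Decimal results assume the module's default 28-digit context until the precision is raised):

    >>> 1.1 / 3.3                           # binary doubles: representation error shows up in the last place
    0.33333333333333337
    >>> from decimal import Decimal, getcontext
    >>> Decimal("1.1") / Decimal("3.3")     # exact decimal inputs, default 28-digit precision
    Decimal('0.3333333333333333333333333333')
    >>> getcontext().prec = 50              # boost working precision well beyond the digits you care about
    >>> Decimal("1.1") / Decimal("3.3")
    Decimal('0.33333333333333333333333333333333333333333333333333')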