[Kevin Jacobs]
#- Decimal('2.4000', precision=2, scale=1) == Decimal('2.4')
#- Decimal('2.4', precision=5, scale=4) == Decimal('2.4000')
#-
#- Remember, these literals are frequently coming from an
#- external source that must be constrained to a given schema.

[Facundo]
> I like it a lot, but not for Decimal.

This is another face of "what to do with float". Just because your
default context has n digits of precision doesn't mean that all your
input will. If my input has 2.4000 (but the extra zeros were measured),
then I want to keep that information. If my input has 2.4 (and I didn't
measure beyond those digits), I want to know that for error analysis.
If my input has 1.1 (only measured to two digits) but my context is 25,
I don't want to assume that the machine's float-rounding is correct;
the fairest "I don't know" estimate is still zero, even out to digit 100.

-jJ
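[For context: the behavior being argued for above is in fact what Python's
decimal module ended up doing. A Decimal constructed from a string keeps
exactly the digits of the literal, including trailing zeros, so 2.4000 and
2.4 compare equal numerically but remain distinguishable. A quick sketch:]

```python
from decimal import Decimal

a = Decimal('2.4000')   # four digits after the point, all "measured"
b = Decimal('2.4')      # only one digit after the point

# Numerically equal...
print(a == b)           # True

# ...but the significance information from the literal is preserved:
print(a)                # 2.4000
print(b)                # 2.4
print(a.as_tuple())     # DecimalTuple(sign=0, digits=(2, 4, 0, 0, 0), exponent=-4)
```

[The construction step never rounds to the context precision; only arithmetic
operations do, which is why the extra zeros survive.]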