On Tue, 13 Apr 2004, Batista, Facundo wrote:

> The issue is that this limit is artificial: as long as it's a long, you
> should be able to make it as big as your memory lets you.

I agree with Tim that underflow/overflow are useful flags of something going wrong. Very few people would connect a Decimal overflow with "Gee, why is Python allocating all of my system's memory?" I would like to see a soft default limit that can be modified.

As for the value of the default limits, I would choose both a default precision and a default exponent limit that fall within the range representable by a double-precision floating-point number. This helps in two cases:

1) People will try to interconvert FP and Decimal at various points. Having one with a significantly different range will certainly give some surprises. (Example: a Decimal gets converted to double-precision FP before being used in OpenGL or Direct3D to draw a pie chart, graph, etc.)

2) Having a default range somewhere inside double precision means that some mythical "efficient" implementation could actually use double-precision FP operations to get close and then use a cleanup step to check the final digit. (Very useful for exponentials.)

-a
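P.S. For anyone who wants to play with the idea, here is a minimal sketch using the decimal context machinery. The specific values (prec=15, Emax/Emin of +/-308) are just my rough reading of where IEEE-754 double range sits, not anything the list has decided on:

    from decimal import Decimal, getcontext, Overflow

    ctx = getcontext()
    ctx.prec = 15        # roughly the significant decimal digits a double carries
    ctx.Emax = 308       # roughly the largest decimal exponent a double can hold
    ctx.Emin = -308

    # Values inside this range round-trip through float without surprises,
    # e.g. before being handed to an FP-only drawing API.
    d = Decimal("1.23456789012345E+300")
    print(float(d))      # still a finite double

    # Exceeding the soft limit signals Overflow instead of eating memory.
    try:
        Decimal("1E+300") * Decimal("1E+300")
    except Overflow:
        print("Overflow trapped")

The point is that a program that really wants huge numbers just raises Emax on its own context; everyone else gets a flag instead of an unbounded allocation.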