Forgive me if I'm being a pest, but no one has commented on the real reason I asked the question. What does everyone think of the idea of having these three built-in numeric types?

1) An Int, implemented as an infinite-precision integer (Python longs), with the constant/__str__ form +-NN, such as 0, -123, 173_394, etc.

2) A Decimal, implemented with the .NET decimal float (or the IBM decimal if the .NET decimal sucks too much), with the constant/__str__ form +-NN.NN, such as 0.0, -123.0, 173_394.0, 173394.912786765, etc.

3) A binary Float, implemented with hardware floating point, with the constant/__str__ form +-NN.NN+-eNN, such as 0e0, -123e0, 173.39e+3, 2.35e-78, etc.

There would be no automatic conversion, with two exceptions: the / operator would convert from Int to Decimal, and the Math module would convert Int and Decimal values to Float for almost all functions (except simple ones like abs, min, and max).
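For concreteness, here is a minimal sketch of those conversion rules in today's Python, with decimal.Decimal standing in for the proposed Decimal type and float standing in for the binary Float. The helper names int_div and math_sqrt are purely illustrative, not part of the proposal:

    from decimal import Decimal
    import math

    def int_div(a: int, b: int) -> Decimal:
        # The one implicit widening on operators: / takes two Ints
        # and yields a Decimal instead of truncating or going binary.
        return Decimal(a) / Decimal(b)

    def math_sqrt(x) -> float:
        # Math-module functions would widen Int and Decimal
        # arguments to the binary Float before computing.
        return math.sqrt(float(x))

    print(int_div(1, 3))            # Decimal('0.3333333333333333333333333333')
    print(math_sqrt(Decimal("2")))  # 1.4142135623730951
    print(abs(Decimal("-123.0")))   # abs stays in Decimal: Decimal('123.0')

The point of the split is that only the two lossy-ish boundaries (integer division and transcendental math) ever change a value's type; everything else would require an explicit conversion.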