On Mon, 9 Aug 2004 14:20:15 -0400, Tim Peters wrote:

> [Mark Hahn]
>> Forgive me if I'm being a pest,
>
> Nope, not at all.
>
>> but no one has commented on the real reason I asked the question. What
>> does everyone think of the idea of having these three built-in numeric types?
>
> It depends on your goals for Prothon, and also on the relative
> performance of the various numeric implementations. Semantically, it
> would be nicest if Python used the IBM decimal type for everything by
> default (whether spelled 123, 123.0, or 123e0).

Are you sure you mean that? I have been assuming that there is a
fundamental need for an integer type that has no fraction. There are
many places, like indexing, where you semantically want integers. I think
if speed were no issue I'd still have integers and decimal floats.

> That's not gonna
> happen, both because of backward compatibility, and because our
> implementation of that type is "too slow".
>
> I don't know how fast or slow the .NET Decimal type is on its own, let
> alone compared to whatever implementations of bigints and binary
> floats you're using. Speed may or may not matter to your target
> audience (although, in reality, you'll have multiple audiences, each
> with their own irrational demands <wink>).

I'm already planning on using Jim's IronPython implementation of your
BigInts, so I might as well port your Decimal code over for the Prothon
Decimal also. The only thing I've ever stolen from Python for Prothon
was your bigint code, so I might as well be consistent in my thievery. :-)

>> 1) An Int implemented with infinite precision integer (Python Longs) with
>> the constant/__str__ form of +-NN such as 0, -123, 173_394, etc.
>>
>> 2) A Decimal implemented with the .Net decimal float (or the IBM decimal if
>> the .Net decimal sucks too much) with the constant/__str__ form of +-NN.NN
>> such as 0.0, -123.0, 173_394.0, 173394.912786765, etc.
>>
>> 3) A binary Float implemented with the hardware floating point with the
>> constant/__str__ form of +-NN.NN+-eNN such as 0e0, -123e0, 173.39e+3,
>> 2.35e-78, etc.
>
> Sounds decent to me, although the IBM flavor of decimal is a full
> floating-point type, and some apps will definitely want to use it for
> 6.02e23 thingies instead of the native binary Float. Maybe use "d" as
> a "decimal float" exponent marker.

I would hate to mess up my "pure" scheme, but it makes sense. Will do.

>> There would be no automatic conversion except the / operator would convert
>> from Int to Decimal and the Math module would convert Int and Decimal
>> values to Float for almost all functions (except simple ones like abs, min,
>> max).
>
> Int * Decimal will be very common in financial apps, and the latter
> are a prime use case for Decimal ("OK, they bought 200 copies of
> 'Programming Prothon' at 39.98 each", etc). Numeric auto-conversions
> that don't lose information are usually helpful.

When I wrote that I was thinking of no automatic conversion from Decimal
to Float. Conversion from Int to Decimal would make sense as you suggest.

Thanks very much. I think I have a handle on the issues now. To
summarize, .Net sucks (no big surprise). :-)
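For anyone who wants to play with the semantics being discussed, here is a
rough Python sketch of the three-type scheme and the conversion rules above.
This is not Prothon code: Python's standard decimal module stands in for the
proposed Decimal type, the variable names are purely illustrative, and the
conversions Prothon would do automatically have to be spelled out by hand here.

from decimal import Decimal

# 1) Int: arbitrary-precision integer with no fractional part
#    (Python's built-in int plays this role).
copies = 200

# 2) Decimal: exact decimal arithmetic -- the natural type for money.
price = Decimal("39.98")

# Int * Decimal converts without losing information, as in the
# "200 copies at 39.98 each" example above.
total = copies * price                    # Decimal('7996.00')

# The proposed "/" rule: dividing Ints would yield a Decimal rather
# than a binary Float; Python needs the conversion written explicitly.
per_copy = Decimal(7996) / Decimal(200)   # Decimal('39.98')

# 3) Float: hardware binary floating point.  Binary representation
#    error is exactly what the Decimal type avoids:
print(0.1 + 0.2)                          # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))    # 0.3

# The suggested "d" exponent marker (e.g. 6.02d23) would mark a decimal
# float literal; the nearest analogue here is a string with an exponent.
avogadro = Decimal("6.02E23")
print(total, per_copy, avogadro)

The point of the sketch is only to show that the proposed auto-conversions
(Int -> Decimal) are lossless, while the Decimal -> Float step is the one
that can silently lose information, which is why it stays manual.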