On Mon, Mar 22, 2010 at 10:22 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Mon, Mar 22, 2010 at 1:56 PM, Raymond Hettinger
> <raymond.hettinger at gmail.com> wrote:
>>
>> On Mar 22, 2010, at 10:00 AM, Guido van Rossum wrote:
>>
>> Decimal + float --> Decimal
>>
>> If everybody associated with the Decimal implementation wants this I
>> won't stop you; as I repeatedly said, my intuition about this one (as
>> opposed to the other two above) is very weak.
>>
>> That's my vote.
>
> I've been lurking on this thread so far, but let me add my +1 to this
> option. My reasoning is that Decimal is a "better" model of Real than
> float, and mixed operations should not degrade the result. "Better"
> can mean different things to different people, but to me the tie
> breaker is the support for contexts. I would not want precision to
> suddenly change in the middle of a calculation because I add 1.0
> instead of 1.
>
> This behavior will also be familiar to users of other "enhanced"
> numeric types such as NumPy scalars. Note that in the older Numeric,
> it was the other way around, but after considerable discussion, the
> behavior was changed.

Thanks, "better" is a great way to express this.

--
--Guido van Rossum (python.org/~guido)
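[Editor's note: a small sketch of the behavior under discussion. In the Python of the time (and in current CPython, absent the coercion proposed here), mixed Decimal/float arithmetic raises TypeError rather than coercing either way, while Decimal precision is governed by the active context:

```python
from decimal import Decimal, getcontext

# The context controls the precision of every Decimal operation,
# which is the "tie breaker" argued for above.
getcontext().prec = 6
result = Decimal(1) / Decimal(7)
print(result)  # Decimal('0.142857'), 6 significant digits

# Mixed Decimal + float arithmetic is not implicitly coerced;
# it raises TypeError instead of silently changing precision.
try:
    Decimal("1") + 1.0
    mixed_allowed = True
except TypeError:
    mixed_allowed = False
print(mixed_allowed)  # False
```

Under the `Decimal + float --> Decimal` rule being voted on, the second expression would instead convert the float and return a Decimal, so adding `1.0` rather than `1` would not degrade the result.]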