On Jan 4, 2008 11:31 AM, Tim Peters <tim.peters at gmail.com> wrote:
> [skip at pobox.com]
> > Thanks for the pointer. Given that it's [round-to-even] been an ASTM
> > standard since 1940 and apparently in fairly common use since the
> > early 1900s, I wonder why it's not been more widely used in the past
> > in programming languages.
>
> Because "add a half and chop" was also in wide use for even longer, is
> also (Wikipedia notwithstanding) part of many standards (for example,
> the US IRS requires it if you do your taxes under the "round to whole
> dollars" option), and -- probably the real driver -- is a little cheaper
> to implement for "normal sized" floats. Curiously, round-to-nearest
> can be unboundedly more expensive to implement in some obscure
> contexts when floats can have very large exponents (as they can in
> Python's "decimal" module -- this is why the proposed decimal standard
> allows operations like "remainder-near" to fail if applied to inputs
> that are "too far apart":
>
>     http://www2.hursley.ibm.com/decimal/daops.html#footnote.8
>
> The "secret reason" is that it can be unboundedly more expensive to
> determine the last bit of the quotient (to see whether it's even or
> odd) than to determine an exact remainder).

Wow. Do you have an opinion as to whether we should adopt
round-to-even at all (as a default)?

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
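
A minimal sketch, assuming Python's decimal module, of how the two rounding
modes Tim contrasts behave on an exact halfway case:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    half = Decimal("2.5")  # exactly halfway between 2 and 3

    # "Add a half and chop": ties round away from zero.
    print(half.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3

    # Round-to-even (banker's rounding): ties go to the nearest even digit.
    print(half.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2

    # Over many halfway cases, round-to-even avoids the systematic upward
    # bias that half-up introduces, which is why the ASTM standard uses it.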