Source: https://mail.python.org/pipermail/python-dev/2001-March/013553.html
[Python-Dev] Minutes from the Numeric Coercion dev-day session
Tim Peters tim.one@home.com
Wed, 14 Mar 2001 00:34:11 -0500
[Paul Prescod]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgeable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

It would go a very short way -- but that may be better than nothing.  Most fp
disasters have to do with "catastrophic cancellation" (a tech term, not a
pejorative), and comparisons have nothing to do with those.  Alas, CC can't
be detected automatically short of implementing interval arithmetic, and even
then tends to raise way too many false alarms unless used in algorithms
designed specifically to exploit interval arithmetic.
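For the record, here is a minimal demonstration of the effect (in modern Python 3 syntax, not the 2.x of this thread): subtracting two nearly equal values destroys most of the significant digits, with no comparison in sight to warn about.

```python
# Catastrophic cancellation in action: a and b agree to ~15 digits,
# so their difference retains almost no correct digits.
a = 1.0 + 1e-15           # nearest double to 1 + 1e-15; rounding already
b = 1.0                   # mangled the tiny addend
computed = a - b          # this subtraction is exact (Sterbenz lemma),
exact = 1e-15             # but it exposes the damage done above
rel_error = abs(computed - exact) / exact
print(computed)           # roughly 1.1e-15, not 1e-15
print(rel_error)          # relative error around 10% -- enormous for doubles
```

No individual operation misbehaves; the information was simply gone before the subtraction happened, which is why no comparison warning could catch it.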

[Guido]
> You mean only for == and !=, right?

You have to do all comparisons or none (see below), but in the former case a
warning is silly (groundless paranoia) *unless* the comparands are "close".
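A hypothetical sketch of that "warn only when the comparands are close" idea, using `math.isclose` (which did not exist in 2001) as the closeness test; the function name and tolerance are made up for illustration:

```python
import math
import warnings

def eq_with_warning(a, b, rel_tol=1e-9):
    """Hypothetical: return a == b, warning when the floats are nearly
    equal but not identical -- the only case where == is suspicious."""
    if a != b and math.isclose(a, b, rel_tol=rel_tol):
        warnings.warn("== on floats that agree to ~9 digits; "
                      "this is where fp surprises live", stacklevel=2)
    return a == b

print(eq_with_warning(0.1 + 0.2, 0.3))   # False, plus a warning
print(eq_with_warning(1.0, 2.0))         # False, no warning: not close
```

Comparing 1.0 to 2.0 stays silent because nobody is surprised by that result; only near-misses like `0.1 + 0.2 == 0.3` trip the warning.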

Before we boosted repr(float) precision so that people could *see* right off
that they didn't understand Python fp arithmetic, complaints came later.  For
example, I've lost track of how many times I've explained variants of this
one:

Q: How come this loop goes around 11 times?

>>> delta = 0.1
>>> x = 0.0
>>> while x < 1.0:   # no == or != here
...     print x
...     x = x + delta
...

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
>>>

A: It's because 0.1 is not exactly representable in binary floating-point.
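The `fractions` module (added well after this thread) can show exactly which double the literal 0.1 names -- assuming modern Python 3:

```python
from fractions import Fraction

stored = Fraction(0.1)            # exact value of the double nearest 1/10
print(stored)                     # 3602879701896397/36028797018963968
print(stored > Fraction(1, 10))   # True: a hair *larger* than one tenth
print(f"{0.1:.20f}")              # 0.10000000000000000555
```

The denominator is 2**55: binary floating-point can only represent fractions whose denominator is a power of two, and 1/10 is not one of them.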

Just once out of all those times, someone came back several days later after
spending many hours struggling to understand what that really meant and
implied.  Their followup question was depressingly insightful:

Q. OK, I understand now that for 754 doubles, the closest possible
   approximation to one tenth is actually a little bit *larger* than
   0.1.  So how come when I add a thing *bigger* than one tenth together
   ten times, I get a result *smaller* than one?
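The questioner's paradox is easy to reproduce and dissect (modern Python 3): the stored 0.1 really is a bit over one tenth, but each `+=` rounds its result too, and the downward roundings along the way outweigh the operand's upward bias.

```python
from fractions import Fraction

tenth = 0.1                        # a hair more than 1/10
print(10 * Fraction(tenth) > 1)    # True: ten *exact* additions would top 1

x = 0.0
for _ in range(10):
    x += tenth                     # but each += rounds to the nearest
                                   # double, and several roundings go *down*
print(x)            # 0.9999999999999999
print(x < 1.0)      # True -- which is why the loop prints an 11th value
```

Two different rounding events are in play: one when 0.1 is stored (error up), and one at each addition (error either way). The question conflates them, which is exactly what makes it so insightful.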

the-fun-never-ends-ly y'rs  - tim



