[/F]
> > try using "%.17g" (show all significant digits) instead of "%f"
> > (fudge it)

[Stanley Krute]
> Aha ! Python defaults to Truth !

Nope: repr(float) gives a closer *approximation* to machine truth than
does str(float), and just *enough* of the truth so that eval(repr(x)) == x.
In general, you cannot rely on eval(str(x)) == x (in fact, it's unusual to
get the same float back if you go thru str() conversion first).

If you wanted the full truth, then e.g. you would get this instead:

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
>>>

That very long decimal number is the exact value of the binary
approximation to 0.1 stored in the machine by your HW -- and assuming that
your platform C library does best-possible conversion of the string "0.1"
to an IEEE-754 double precision number to begin with.

> ...
> The inaccuracies exist 15 or so places to the right of the decimal
> point.

In general, since there are 53 bits of precision in an IEEE double, the
tiniest errors creep into the 53rd significant bit.  An error of 1 in 2**53
is an error of 1 in 10**x, for *some* value of x.  A little math reveals

    x = log10(2**53) = 53 * log10(2)

>>> import math
>>> 53 * math.log10(2)
15.954589770191003
>>>

So when viewed as decimal again, even the tiniest error will show up in
the 15th or 16th significant decimal digit.  Since that's what you've
already observed, you have reason to believe it <wink>.

it's-not-insane-but-it-is-maddening-ly y'rs  - tim
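
For the record, a small interactive sketch pulling those pieces together
(assuming IEEE-754 doubles and a Python new enough that decimal.Decimal
accepts a float; the decimal module is used here only to display the stored
binary value exactly):

>>> from decimal import Decimal
>>> Decimal(0.1)            # the exact binary approximation stored for 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> "%.17g" % 0.1           # 17 significant digits -- enough to round-trip
'0.10000000000000001'
>>> eval(repr(0.1)) == 0.1  # repr() gives back the very same float
True
>>>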