[Martin v. Löwis]
> ...
> For example, would it be possible to automatically fall back
> to binary floating point if the result cannot be represented
> exactly (which would be most divide operations)? Would that
> help?

It's a puzzle.  .NET Decimal is really more a fixed-point type than a
floating-point type.  It consists of a sign bit, a 96-bit binary integer,
and a "scale factor" in 0..28, which is the power of 10 by which the
integer is conceptually divided.  The largest positive representable value
is thus 2**96 - 1 == 79228162514264337593543950335.  The smallest positive
non-zero representable value is 1/10**28.

So for something like 1/3, you get about 28 decimal digits of good result,
which is much better than you can get with an IEEE double.  OTOH, something
like 1/300000000000000000000 starts to make the "gradual underflow" nature
of Decimal apparent:  for numbers with absolute value less than 1, the
number of digits you get decreases as the absolute value shrinks, until at
1e-28 you have only 1 bit of precision (and, e.g., 1.49999e-28 "rounds to"
1e-28).

So it's a weird arithmetic as you approach its limits.  But binary FP is
too, and so is IBM's decimal spec.  A primary difference is that binary FP
has a much larger dynamic range, so you don't get near the limits nearly as
often; and IBM's decimal has a gigantic dynamic range (the expectation is
that essentially no real app will get anywhere near its limits, unless the
app is buggy).
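
To make the fixed-point behavior above concrete, here's a rough Python
sketch (an illustration of the semantics described, not the actual .NET
rounding code).  It models a value as coefficient / 10**scale with a
96-bit coefficient and a scale in 0..28, and rounds an exact fraction to
the nearest such value:

    from fractions import Fraction

    MAX_COEFF = 2**96 - 1   # largest 96-bit coefficient
    MAX_SCALE = 28          # the scale is a power of ten in 0..28

    def to_dotnet_style_decimal(value):
        """Round an exact Fraction to the nearest coefficient/10**scale
        with coefficient <= 2**96 - 1 and 0 <= scale <= 28.
        Returns (coefficient, scale).  Illustrative only."""
        value = Fraction(value)
        # Use the largest scale that keeps the coefficient in range,
        # i.e. keep as many digits after the decimal point as will fit.
        for scale in range(MAX_SCALE, -1, -1):
            coeff = round(value * 10**scale)   # round-half-to-even
            if coeff <= MAX_COEFF:
                return coeff, scale
        raise OverflowError("too large for a 96-bit coefficient")

    def show(value, label):
        coeff, scale = to_dotnet_style_decimal(value)
        print(f"{label:>16s} -> {coeff}e-{scale}")

    show(Fraction(1, 3), "1/3")                    # 28 good digits
    show(Fraction(1, 3 * 10**20), "1/(3e20)")      # only 8 digits survive
    show(Fraction(149999, 10**33), "1.49999e-28")  # collapses to 1e-28
    show(Fraction(1, 10**28), "1e-28")             # smallest nonzero value

Run it and the 1/3 line shows 28 threes, while the 1/(3e20) line shows only
8 of them -- the gradual underflow in action.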
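
For the dynamic-range comparison, plain Python shows the contrast directly
(assuming the stock decimal module, which implements IBM's spec):

    import sys
    from decimal import Decimal, getcontext

    print(1 / 3)                       # 0.3333333333333333 -- ~16 digits
    print(sys.float_info.min)          # 2.2250738585072014e-308
    print(getcontext().Emin, getcontext().Emax)   # -999999 999999
    print(Decimal(1) / Decimal(3))     # 28 significant digits by default

An IEEE double carries only about 16 significant digits, but its exponent
range reaches down to roughly 1e-308 (further with subnormals), and the
decimal module's default context allows exponents out to +/-999999, so
ordinary programs never come near either set of limits.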