> > Decimal floating-point has almost all the pitfalls of binary
> > floating-point, yet I do not see anyone arguing against decimal
> > floating-point on the basis that it makes the pitfalls less apparent.

> Actually, decimal floating point takes care of two of the pitfalls of
> binary floating point:
> * binary/decimal conversion
> * easily modified precision
> When people are taught decimal arithmetic, they're usually taught the
> problems with it, so they aren't surprised. (e.g. 1/3)

But doesn't that just push the real problems further into the background,
making them more dangerous? <0.1 wink> For example, be it binary or
decimal, floating-point addition is still not associative, so even such a
simple computation as a+b+c requires careful thought if you wish the
maximum possible precision. Why are you not arguing against decimal
floating-point if your goal is to expose users to the problems of
floating-point as early as possible?
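A minimal sketch of the non-associativity point, not from the original post: with binary doubles, and equally with the decimal module once the context precision is exceeded, (a + b) + c and a + (b + c) can differ. The specific values and the precision setting here are just illustrative choices.

    from decimal import Decimal, getcontext

    # Binary floats: addition is not associative.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0 -- the 1.0 is absorbed by -1e16 before the cancellation

    # Decimal floats hit the same problem once precision runs out.
    getcontext().prec = 3            # deliberately tiny precision
    x, y, z = Decimal("100"), Decimal("-100"), Decimal("0.001")
    print((x + y) + z)   # 0.001
    print(x + (y + z))   # 0 -- -100 + 0.001 rounds back to -100 at 3 digits

So the order in which a+b+c is summed still matters either way; decimal only changes which inputs are representable, not the rounding behaviour of addition itself.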