Tim Peters <tim.one@home.com>:
> Not all scientific work consists of predicting the weather with inputs known
> to half a digit on a calm day <wink>.  Knuth gives examples of
> ill-conditioned problems where resorting to unbounded rationals is faster
> than any known stable f.p. approach (stuck with limited precision) -- think,
> e.g., chaotic systems here, which includes parts of many hydrodynamics
> problems in real life.

Hmmm...good answer.  I still believe real-world measurements max out below
48 bits or so of precision, because the real world is a noisy, fuzzy place.
But I can see that most of the algorithms for partial differential equations
would multiply those inputs by very small or very large quantities
repeatedly.  The range-doubling trick for catching divergences is neat, too.
So maybe there's a market for 128-bit floats after all.

I'm still skeptical about how likely those applications are to influence the
architecture of general-purpose processors.  I saw a study once that said
heavy-duty scientific floating point accounts for only about 2% of the
computing market -- and I think it's significant that MMX instructions and
so forth entered the Intel line to support *games*, not Navier-Stokes
calculations.

That 2% will have to get a lot bigger before I can see Intel doubling its
word size again.  It's not just the processor design; the word size has huge
implications for buses, memory controllers, and the whole system
architecture.
--
<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

The United States is in no way founded upon the Christian religion
	-- George Washington & John Adams, in a diplomatic message to Malta.
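[Not part of the original exchange -- a minimal sketch of the kind of
ill-conditioning Knuth is being cited for.  Muller's recurrence converges to
6 in exact arithmetic, but any rounding error gets amplified until the
iteration locks onto the spurious fixed point 100; unbounded rationals
(Python's stdlib Fraction) stay on the true trajectory where 64-bit floats
do not.  Assumes nothing beyond the standard library.]

    # Muller's recurrence: x_{n+1} = 111 - 1130/x_n + 3000/(x_n * x_{n-1})
    # Exact limit from x0=2, x1=-4 is 6; floats drift to the repelling-made-
    # attracting fixed point 100 once rounding error creeps in.
    from fractions import Fraction

    def muller(x0, x1, steps):
        a, b = x0, x1
        for _ in range(steps):
            a, b = b, 111 - 1130 / b + 3000 / (a * b)
        return b

    print(muller(2.0, -4.0, 25))                        # doubles: ~100
    print(float(muller(Fraction(2), Fraction(-4), 25))) # rationals: ~6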