Kristján V. Jónsson wrote:
> I can't see how this situation is any different from the re-use of
> low ints. There is no fundamental law that says that ints below 100
> are more common than others, yet experience shows that this is so,
> and so they are reused.

There are two important differences:

1. For ints, it is possible to determine whether a value is "special"
   in constant time, and also to fetch the singleton value in constant
   time; the same isn't possible for floats.

2. There may be a loss of precision in reusing an existing value
   (although I'm not certain that this could really happen). For
   example, could it be that two values compare equal under ==, yet
   are different values? I know this can't happen for integers, so I
   feel much more comfortable with that cache.

> Rather than view this as a programming error, why not simply accept
> that this is a recurring pattern and adjust Python to be more
> efficient when faced with it? Surely a lot of karma lies that way?

I'm worried about the penalty that this causes in terms of run-time
cost. Also, how do you choose which values to cache?

Regards,
Martin
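[Editor's note: a minimal illustrative sketch of the two points above, assuming
current CPython behavior. CPython caches small ints (roughly -5..256) as
singletons, while floats get no such cache; and 0.0 vs. -0.0 is one concrete
case where two floats compare equal under == yet are distinguishable values,
which is exactly the hazard a float cache keyed on == would face. Identity
results shown here are implementation details, not language guarantees.]

    import math

    # Small ints are cached by CPython, so equal small ints are
    # typically the very same object (an implementation detail).
    a, b = 100, 100
    print(a is b)            # usually True in CPython

    # Floats are not cached: equal values need not share identity.
    x, y = 100.0, 100.0
    print(x == y, x is y)    # True, False (identity not guaranteed)

    # Two floats that compare equal yet are different values:
    # 0.0 and -0.0 are == but distinguishable by their sign bit.
    pos, neg = 0.0, -0.0
    print(pos == neg)                  # True
    print(math.copysign(1.0, pos))     # 1.0
    print(math.copysign(1.0, neg))     # -1.0

A float cache that mapped -0.0 onto a stored 0.0 (because they compare
equal) would silently change program behavior, which is one reason the
int cache is easier to justify.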