On Sat, Mar 20, 2010 at 4:06 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> On Fri, Mar 19, 2010 at 1:17 PM, Case Vanhorsen <casevh at gmail.com> wrote:
>> On Fri, Mar 19, 2010 at 3:07 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
>>> On Fri, Mar 19, 2010 at 9:37 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
>>>> Making hashes of int,
>>>> float, Decimal *and* Fraction all compatible with one another,
>>>> efficient for ints and floats, and not grossly inefficient for
>>>> Fractions and Decimals, is kinda hairy, though I have a couple of
>>>> ideas of how it could be done.
>>>
>>> To elaborate, here's a cute scheme for the above, which is actually
>>> remarkably close to what Python already does for ints and floats, and
>>> which doesn't require any of the numeric types to figure out whether
>>> it's exactly equal to an instance of some other numeric type.
>
>> Will this change the result of hashing a long? I know that both gmpy
>> and SAGE use their own hash implementations for performance reasons. I
>> understand that CPython's hash function is an implementation detail,
>> but there are external modules that rely on the existing hash
>> behavior.
>
> Yes, it would change the hash of a long.
>
> What external modules are there that rely on existing hash behaviour?

I'm only aware of gmpy and SAGE.

> And exactly what behaviour do they rely on?

Instead of calculating hash(long(mpz)), they calculate hash(mpz)
directly. It avoids creation of a temporary object that could be quite
large, and is faster than the two-step process. I would need to modify
the code so that it continues to produce the same result.

casevh

> Mark
>
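
For context, the kind of scheme being discussed treats every finite number
as an exact fraction p/q and hashes it as p * q**-1 modulo a fixed prime, so
that equal values of different numeric types hash equally. The sketch below
is only an illustration, assuming the 64-bit modulus 2**61 - 1 that CPython
3.2 eventually exposed as sys.hash_info.modulus; the helper name
unified_hash is hypothetical and the details (sign handling, the -1/-2
substitution) are simplified. Non-finite floats are not handled here.

    from decimal import Decimal
    from fractions import Fraction

    P = 2**61 - 1  # prime modulus used on 64-bit builds of CPython 3.2+

    def unified_hash(value):
        """Hash a rational value so that equal numbers hash equally."""
        frac = Fraction(value)          # exact conversion from int/float/Decimal
        p, q = frac.numerator, frac.denominator
        # Hash of p/q is p * q**-1 mod P; pow(q, P - 2, P) is the modular
        # inverse of q by Fermat's little theorem (q is never a multiple of P).
        h = abs(p) * pow(q, P - 2, P) % P
        if p < 0:
            h = -h
        return -2 if h == -1 else h     # CPython reserves -1 as an error code

    # Equal values of different types get equal hashes:
    assert unified_hash(3) == unified_hash(3.0) == unified_hash(Fraction(3))
    assert unified_hash(Decimal("0.5")) == unified_hash(0.5) == unified_hash(Fraction(1, 2))

This also illustrates Case's compatibility concern: an extension type such as
gmpy's mpz that computes its hash directly from its own internal limbs would
need to reproduce the same reduction modulo P to keep hash(mpz) equal to
hash(long(mpz)) under the new scheme.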