After thinking more about Py_ssize_t, I'm surprised that we're not hearing about 64-bit users hitting a couple of major problems.

If I'm understanding what was done for dictionaries, the hash table can now grow larger than the range of hash values. Accordingly, I would expect large dictionaries to have an unacceptably high number of collisions. OTOH, we haven't heard a single complaint, so perhaps my understanding is off.

The other area where I expected to hear wailing and gnashing of teeth is users compiling with third-party extensions that haven't been updated to the Py_ssize_t API and still use longs. I would have expected some instability due to the size mismatches in function signatures -- the difference would only show up with giant data structures -- the bigger they are, the harder they fall. OTOH, there haven't been any complaints here either. I would have expected someone to submit a patch to pyport.h adding a #define to force Py_ssize_t back to a long, so that the poster could make a reliable build that included non-updated third-party extensions.

In the absence of a bug report, it's hard to know whether there is a real problem. Have all major third-party extensions adopted Py_ssize_t, or is some divine force helping unconverted extensions work with converted Python code? Maybe the datasets just haven't gotten big enough yet.
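
To make the dictionary concern concrete, here is a minimal sketch of the failure mode I have in mind (slot_for is a made-up name, not CPython's actual lookup code; it just assumes the usual power-of-two table with mask-based slot selection):

    #include <stddef.h>

    /* If hash values are only 32 bits wide but the table holds more
       than 2**32 slots, the upper bits of the mask can never be set
       by the hash, so the top portion of the table is unreachable and
       extra collisions pile up in the low slots. */
    size_t slot_for(unsigned int hash32, size_t table_size)
    {
        return (size_t)hash32 & (table_size - 1);  /* table_size: power of two */
    }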
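
And here is a toy, stand-alone illustration of the kind of silent truncation a long/Py_ssize_t width mismatch could cause on a platform where long stays 32 bits while Py_ssize_t is 64 (no real Python API involved; fake_ssize_t and real_size are invented names):

    #include <stdio.h>

    typedef long long fake_ssize_t;     /* stands in for a 64-bit Py_ssize_t */

    static fake_ssize_t real_size(void)
    {
        return 5000000000LL;            /* needs more than 32 bits */
    }

    int main(void)
    {
        /* An unconverted extension effectively calls through a prototype
           declaring long; wherever long is only 32 bits, the value gets
           mangled on the way through. */
        long seen = (long)real_size();
        printf("size as the extension sees it: %ld\n", seen);
        return 0;
    }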
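
The pyport.h escape hatch I expected to see would look something like this (PY_FORCE_SSIZE_T_LONG is a name I'm inventing, and the #else branch is a simplification of what pyport.h really does):

    #ifdef PY_FORCE_SSIZE_T_LONG
    typedef long Py_ssize_t;     /* match extensions still built around long */
    #else
    typedef ssize_t Py_ssize_t;  /* simplified stand-in for the real logic */
    #endif

Raymond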