Guido van Rossum <guido@python.org> writes:

> What about my other objections?

Besides "breaks binary compatibility", the only other objection was:

> Also could cause lots of compilation warnings when user code stores
> the result into an int.

True; this would be a migration issue. To be safe, we would probably
define Py_size_t (or Py_ssize_t). People on 32-bit platforms would not
notice the problems; people on 64-bit platforms would soon provide
patches to use Py_ssize_t in the core.

That is a lot of work, so it requires careful planning, but I believe
this needs to be done sooner or later. Given MAL's response and yours,
I have already accepted that it will likely be done rather later than
sooner.

I don't agree with MAL's objection

> Wouldn't it be easier to solve this particular problem in
> the type used for mmapping files ?

Sure, it would be faster and easier, but that is the dark side of the
force. One day people will find that they cannot have string objects
larger than 2 GiB, and, perhaps somewhat later, that they cannot have
more than two billion objects in a list. It is unlikely that the
problem will go away, so at some point all of these problems will
become pressing. It is perfectly reasonable to defer the binary
breakage to that later point, except that probably more users will be
affected in the future than would be affected now (because 64-bit
Python installations are currently rare).

Regards,
Martin
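[Editor's note: a minimal C sketch of the migration issue discussed above. The Py_ssize_t typedef and the object_length() function here are hypothetical stand-ins, not the definitions later adopted in CPython's headers; the point is only how existing user code that stores a now-wider return value into an int draws truncation warnings on 64-bit platforms.]

    #include <stdio.h>
    #include <sys/types.h>   /* ssize_t (POSIX) */

    /* Hypothetical typedef for this sketch only; CPython's real
     * definition is more elaborate. */
    typedef ssize_t Py_ssize_t;

    /* Hypothetical stand-in for an API call whose return type changes
     * from int to Py_ssize_t. */
    static Py_ssize_t object_length(void)
    {
        return (Py_ssize_t)3000000000LL;  /* > 2**31 - 1; only meaningful on 64-bit */
    }

    int main(void)
    {
        /* Pre-migration user code: on an LP64 platform this truncates
         * the value and typically triggers an implicit-conversion
         * warning (e.g. with -Wconversion) -- the migration cost
         * mentioned above. */
        int old_style = object_length();

        /* Post-migration user code keeps the full range. */
        Py_ssize_t new_style = object_length();

        printf("old_style=%d new_style=%zd\n", old_style, (ssize_t)new_style);
        return 0;
    }

On 32-bit platforms both declarations behave identically, which is why the problem stays invisible there until 64-bit builds become common.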