The current PyLong implementation represents arbitrary-precision integers in units of 15 or 30 bits. I presume the purpose is to avoid overflow in addition, subtraction and multiplication. But compilers these days offer intrinsics that let one access the overflow flag, and obtain the result of a 64-bit multiplication as a 128-bit number. Or at least on x86-64, which is the dominant platform. Is there a reason this is not done? If it is only because no one has bothered, I may be able to do it.
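To illustrate the kind of intrinsics I mean, here is a minimal sketch using the GCC/Clang builtins `__builtin_add_overflow` and the `unsigned __int128` type on x86-64. The helper names (`add_digits`, `mul_digits`) are just illustrative, not anything in CPython:

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch only: full 64-bit digits with hardware overflow detection,
 * instead of 15/30-bit digits chosen to leave headroom. */

/* Add two 64-bit digits; returns true if the sum overflowed,
 * i.e. the carry into the next digit. */
static bool add_digits(uint64_t a, uint64_t b, uint64_t *sum)
{
    return __builtin_add_overflow(a, b, sum);
}

/* Multiply two 64-bit digits; returns the low 64 bits and writes the
 * high 64 bits (the carry into the next digit) through *hi. */
static uint64_t mul_digits(uint64_t a, uint64_t b, uint64_t *hi)
{
    unsigned __int128 product = (unsigned __int128)a * b;
    *hi = (uint64_t)(product >> 64);
    return (uint64_t)product;
}
```

On x86-64 these compile down to an `add` followed by a carry-flag check and a single `mul` producing the RDX:RAX pair, so no bits of each machine word would be wasted.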