Hi,

stefan brunthaler, 22.07.2010 13:22:
> during the last year, I have developed a couple of quickening-based
> optimizations for the Python 3.1 interpreter. As part of my PhD
> programme, I have published a first technique that combines quickening
> with inline caching at this year's ECOOP, and subsequently extended
> this technique to optimize several load instructions as well as
> eliminate redundant reference counting operations from instructions,
> which has been accepted for publication for an upcoming conference.
> [...]
> I wonder whether you would be interested in integrating these
> optimizations with the Python 3 distribution, hence this mail. I could
> send copies of the papers, as well as provide my prototype source code
> to interested members of the python development community.

I'm absolutely interested, although not for the CPython project but for
Cython.

I wonder how you do inline caching in Python if the methods of a type can
be replaced by whatever at runtime. Could you elaborate on that? Based on
what information do you switch between inlining states? Or do you restrict
yourself to builtin types? That might be worth it already, just think of
list.append(). We have an optimistic optimisation for object.append() in
Cython that gives us massive speed-ups in loops that build lists, even if
we don't know at compile time that we are dealing with lists.

Stefan
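
PS: For illustration, that optimistic append fast path boils down to
roughly the following C. This is a hand-written sketch of the idea, not
Cython's actual generated code, and the helper name is made up:

    #include <Python.h>

    /* Optimistic obj.append(x): take the C-level fast path when obj
     * really is a list, otherwise fall back to a generic method call. */
    static int optimistic_append(PyObject *obj, PyObject *x)
    {
        if (PyList_CheckExact(obj)) {
            /* fast path: append via the list C API, no attribute lookup */
            return PyList_Append(obj, x);
        }
        /* generic fallback: look up and call obj.append(x) dynamically */
        PyObject *result = PyObject_CallMethod(obj, "append", "O", x);
        if (result == NULL)
            return -1;
        Py_DECREF(result);
        return 0;
    }

The type check is cheap compared to the dynamic attribute lookup and call,
so loops that really do build lists win big, while other objects only pay
for one extra pointer comparison.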