> I think that the right solution of this issue is generalizing the import
> machinery and allowing it to cache not just files, but arbitrary chunks of
> code. We already use precompiled bytecode files for exactly the same goal --
> speed up the startup by avoiding compilation. This solution could be used
> for caching other generated code, not just namedtuples.

I thought about adding a C implementation based on PyStructSequence. But I
like Jelle's approach because it can improve performance on all Python
implementations. It reduces the amount of source passed to eval, and it
shares code objects for most methods. (See
https://github.com/python/cpython/pull/2736#issuecomment-316014866 for a
quick-and-dirty benchmark on PyPy.)

I agree that the template + eval pattern is nice for readability compared to
other meta-programming magic, and code cache machinery could improve the
template + eval pattern in CPython. But namedtuple is very widely used; it is
loved enough to be optimized on more than just CPython. So I prefer Jelle's
approach to adding code cache machinery in this case.

Regards,
INADA Naoki <songofacandy at gmail.com>
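
As an illustration of the pattern discussed above, here is a minimal,
self-contained sketch of template + eval with shared code objects. The
names are illustrative only (make_record below is hypothetical, not the
actual code in Jelle's PR): only __new__ is built from a per-class source
string and passed to eval, while the remaining methods are ordinary nested
functions, so every generated class reuses the same code objects.

    from operator import itemgetter

    _tuple_new = tuple.__new__

    def make_record(typename, field_names):
        # Sketch only; assumes at least one field name.
        field_names = tuple(field_names)
        arg_list = ', '.join(field_names)

        # Only __new__ needs per-class source code: build it from a
        # small template and compile it once with eval().
        source = f'lambda _cls, {arg_list}: _tuple_new(_cls, ({arg_list},))'
        __new__ = eval(source, {'_tuple_new': _tuple_new})
        __new__.__name__ = '__new__'

        # The remaining methods are plain nested functions.  Each
        # generated class gets its own function objects, but they all
        # reuse the same underlying code objects, so nothing else is
        # recompiled per class.
        def _replace(self, **kwds):
            return self.__class__(*map(kwds.pop, field_names, self))

        def __repr__(self):
            return self.__class__.__name__ + repr(tuple(self))

        class_namespace = {
            '__slots__': (),
            '_fields': field_names,
            '__new__': __new__,
            '_replace': _replace,
            '__repr__': __repr__,
        }
        for index, name in enumerate(field_names):
            class_namespace[name] = property(itemgetter(index))
        return type(typename, (tuple,), class_namespace)

    Point = make_record('Point', ['x', 'y'])
    Point3D = make_record('Point3D', ['x', 'y', 'z'])
    p = Point(1, 2)
    print(p.x, p._replace(y=5))   # 1 Point(1, 5)
    print(Point._replace.__code__ is Point3D._replace.__code__)  # True

The last print shows the point about shared code objects: _replace (and the
other non-eval methods) are compiled once, when the factory is defined, no
matter how many record classes are created afterwards.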