Hello everybody,

Just a quick update on the Python Specializing Compiler. I started working on a quite different prototype one month ago. This is all very experimental, so I will delay the technical explanations a while longer.

The motivation is as follows: the previous implementation lost WAY too much time in compilation proper: around 0.2 seconds for a 10-15 line function! I am not sure that writing Psyco in very optimized C would have been enough to help in anything but the most particular cases. Because of this I am rethinking the way it could work. While I have not abandoned the base ideas, I switched to a slightly different approach which, I think, could produce slightly less efficient machine code, but incredibly more quickly.

This work is based on research articles describing techniques for producing code at run-time [XXX references]. With the execution of only 4 to 6 processor instructions, they emit one instruction of the dynamic code. Compare this with a typical C compiler, which takes thousands of times more clock cycles to run than the number of instructions it emits! Of course, I do not question that it produces more optimized code than what can be done in 4 to 6 instructions; however, this approach has the great advantage that even if some dynamically produced code is used only a few times, or even once, the overhead of having produced it in the first place is very low -- perhaps even lower than the time that would have been needed to load it from a disk file had it been precompiled, and in the worst cases comparable to the current Python VM interpreter.

A bientot,

Armin.