> I am generating pseudo-code, which is interpreted by a C module. (With
> real assembler code, it would of course be much faster, but it was just
> simpler for the moment.)

This has great promise!  Once you have an interpreter for some kind of
pseudo-code, it's always possible to tweak the interpreter or the
pseudo-code to make it faster.  And you can make another jump to machine
code to make it a lot faster.

There was a project (p2c or python2c) that tried to compile an entire
Python program to C code that was mostly just calling the Python runtime
C API functions.  It also obtained about a factor of 2 in speed-up, but
its problem was (if I recall) that even a small Python module translated
into hundreds of thousands of lines of C -- think what that would do to
locality.

Since you have already obtained the same speedup with your approach, I
think there's great promise.  Count on sending in a paper for the next
Python conference!

> > How would you compare the sophistication of your type inference
> > system to the one I've outlined above?
>
> Yours is much more complete, but runs statically.  Mine works at
> run-time.  As explained in detail in the readme file, my plan is not to
> make a "compiler" in the usual sense.  I actually have no type
> inferences; I just collect at run time what types are used at what
> places, and generate (and possibly modify) the generated code according
> to that information.

Very cool: a Python JIT compiler.

> (More about it later.)

Can't wait!

--Guido van Rossum (home page: http://www.python.org/~guido/)
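
P.S. To make the run-time idea concrete, here is a toy sketch in Python
of collecting types at a call site and keeping one variant per observed
type combination.  This is hypothetical and entirely mine, not Armin's
code: where the sketch just falls back to the generic function, a real
system would generate pseudo-code (or machine code) specialized for the
observed types.

    def specializing(func):
        """Cache one variant of func per observed argument-type tuple."""
        cache = {}

        def wrapper(*args):
            key = tuple([type(a) for a in args])
            impl = cache.get(key)
            if impl is None:
                # This is where a real system would emit code specialized
                # for this type combination; the sketch only records the
                # types and reuses the generic function.
                print("specializing %s for %s" % (func.__name__, key))
                cache[key] = impl = func
            return impl(*args)

        return wrapper

    def add(a, b):
        return a + b
    add = specializing(add)

    add(1, 2)      # first (int, int) call: a new variant is recorded
    add(3, 4)      # cache hit: the int/int variant is reused
    add("x", "y")  # new type tuple (str, str): another variant

The point of keying on the full type tuple is that each cached variant
only ever sees one type combination, so it can skip the dynamic dispatch
that makes generic Python bytecode slow.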