On Tue, 17 Apr 2001 22:35:56 GMT, news-server.columbus.rr.com
<antediluvianistic at columbus.rr.com> wrote:

> Has anyone contemplated creating a python 'compiler', which can
> produce a self-contained binary executable? (for when speed is an
> absolute necessity) ..or does such a thing exist already? I'd love to
> give game development in Photon a run, and feel that it's simply not
> fast enough to perform some operations.
>
> Any thoughts/comments are appreciated.
> Brian

I started work on a project of this type.

I implemented basic type information (ints, floats, sequences, mappings,
and functions, the last three with parametric types), the concept of
"type widening" (producing a type which contains at least the union of
the two given types), and the concept of the result type of an operation
such as addition, subscripting, slicing, or calling.  (Actually, I think
slicing was not yet implemented.)

Then I implemented a type inference engine which works on Python
bytecodes.  At each bytecode, the types of each local and each active
stack position are determined.  In the presence of looping constructs,
the analysis is repeated until the type information "settles", so it is
possible to write functions which currently take infinite time in the
inferencer.  Example:

    def f(t):
        x = None
        for i in range(t):
            x = [x]
        return x

About a dozen bytecodes were actually implemented, not the full set.

Since a function's result type is parametric on its input types, you call
the inference function with a bytecode function and a tuple of proposed
argument types.  Thus, given

    def f(x):
        return x*x

you might propose that f be given an integer:

    >>> infer(f, (IntType(),))
    IntType

or a float:

    >>> infer(f, (FloatType(),))
    FloatType

Another concept I introduced is the "solidification" of a function.
Certain globals and module contents can be marked as "solid", and loads
of them will be hoisted into constant loads.  Typically, builtins like
'range', the 'math' module, and its contents would be marked as "solid".
This sidesteps the question of whether the type inference remains correct
given a possible future rebinding of range(), and is also a small
optimization.  (These builtins are given type information by hand.)

Next I implemented a native code generator using GNU Lightning for JIT
compilation (emitting C code would be another alternative).  Using the
type information, it was hoped that the following optimizations could be
performed:

 * float and integer arithmetic done inline
 * calls from JIT functions to JIT functions made as native calls
 * overhead of the bytecode interpreter loop eliminated

I implemented a similar subset of bytecodes in the JIT compiler as in the
inference system.

This system can't JIT-compile any useful code yet, and much of it has
become outdated by bytecode changes since Python 1.5.2.  Exception
handling is another big issue to which I hadn't made so much as a nod.

If anybody's interested in this code, I'll be happy to send you what I
have.  It's covered under a Python-style license.

Jeff
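
A sketch of the "type widening" idea described in the post, with types
spelled as plain tuples -- ('int',), ('list', T), ('union', a, b) --
rather than whatever classes the real code used; everything below is
illustrative, not the actual interface:

    def widen(a, b):
        # return a type containing at least the union of a and b
        if a == b:
            return a
        if a[0] == 'list' and b[0] == 'list':
            # list types are parametric on their element type
            return ('list', widen(a[1], b[1]))
        return ('union', a, b)      # keep both alternatives, no collapsing

    # One plausible way the post's non-terminating example fails to
    # settle: the body of  "x = [x]"  maps x's type T to ('list', T),
    # and widening that with the previous type yields a strictly larger
    # type each pass, so a fixed point is never reached.
    t = ('none',)
    for i in range(4):
        t = widen(t, ('list', t))   # t grows deeper every iteration

Collapsing mixed cases to a single catch-all type instead of keeping the
union would give the lattice a finite height and let examples like this
settle.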
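
A toy version of the per-bytecode inference pass: walk a sequence of
symbolic opcodes, keeping a type for every local and every active stack
slot.  Real CPython bytecodes, jumps, and the settling loop are omitted,
and the opcode tuples, type spellings, and the infer() signature are
made up for illustration -- they are not the post's actual API:

    INT, FLOAT, ANY = ('int',), ('float',), ('any',)

    def multiply_result(a, b):
        # result type of BINARY_MULTIPLY on two numeric types
        if a == INT and b == INT:
            return INT
        if a in (INT, FLOAT) and b in (INT, FLOAT):
            return FLOAT
        return ANY

    def infer(code, arg_types):
        locals_ = dict(arg_types)   # name -> inferred type
        stack = []                  # type of each active stack position
        for op, arg in code:
            if op == 'LOAD_FAST':
                stack.append(locals_.get(arg, ANY))
            elif op == 'STORE_FAST':
                locals_[arg] = stack.pop()
            elif op == 'BINARY_MULTIPLY':
                b = stack.pop()
                a = stack.pop()
                stack.append(multiply_result(a, b))
            elif op == 'RETURN_VALUE':
                return stack.pop()
        return ANY

    # roughly the bytecode for  def f(x): return x*x
    code = [('LOAD_FAST', 'x'), ('LOAD_FAST', 'x'),
            ('BINARY_MULTIPLY', None), ('RETURN_VALUE', None)]

    assert infer(code, {'x': INT}) == INT
    assert infer(code, {'x': FLOAT}) == FLOAT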
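
And a sketch of the "solidification" step: names marked solid are assumed
never to be rebound, so a load of one of them can be hoisted into a
constant load whose type information is written by hand.  The table
layout and opcode tuples here are invented for illustration:

    import math

    # hand-written type information for a few "solid" names
    SOLID = {
        'range': (range, ('function', (('int',),), ('list', ('int',)))),
        'math':  (math,  ('module', 'math')),
    }

    def rewrite_global_load(name):
        if name in SOLID:
            value, typ = SOLID[name]
            # hoisted: the value becomes a constant and its type is known
            return ('LOAD_CONST', value), typ
        # not solid: leave the global load alone, type unknown
        return ('LOAD_GLOBAL', name), ('any',)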