Douglas Alan wrote:
>All this should come as no shock -- 100 times slower for an
>interpreted language is just par for a good interpreter.  Before the
>spiffy Self compiler, state of the art was 10x slower than C for a
>really bitchin' highly optimized Smalltalk compiler.

I have a package called PyDaylight which puts a deep Python layer
(with OO, exceptions, iterators, etc.) on top of a C library for
chemical informatics.  I've done a lot of timing tests on the library
to see how the Python code compares to morally equivalent C code.

My tests put Python code anywhere from 50% slower than C (for code
where most of the work was done in the C extension -- and that case
should be faster now that Python has xreadlines) to about 40x slower.
The slowest case was code that did a lot of list lookups instead of
pointer increments and used the object layer heavily, which translates
attribute lookups down to the C level via __getattr__ hooks.  Most
code is O(10x) slower.

Of course, to balance it out, in most cases the Python code is an
order of magnitude fewer lines and an order of magnitude easier to
read.

So I don't agree with your estimate of 100x slower, at least for the
domain of programs I've investigated.

                    Andrew
                    dalke at acm.org
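P.S. For anyone wondering what that object layer looks like, here is a
rough, runnable sketch of the __getattr__ pattern.  The names
(Molecule, toolkit, get_property) are made up for illustration and the
"C library" is faked with a dict -- this is not the actual PyDaylight
code, just the general shape: any attribute access that misses the
instance dict drops into __getattr__ and turns into a call toward the
extension, which is where the 40x case comes from.

# Stand-in for the C extension module.  A real wrapper would call
# into the compiled library here instead of reading a dict.
class _FakeToolkit:
    def __init__(self):
        self._properties = {1: {"weight": 18.015, "name": "water"}}

    def get_property(self, handle, attr):
        return self._properties[handle][attr]

toolkit = _FakeToolkit()

class Molecule:
    def __init__(self, handle):
        self.handle = handle

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails, so 'handle'
        # itself never recurses.  Each miss costs a Python-level call
        # plus the trip into the (simulated) C layer.
        return toolkit.get_property(self.handle, attr)

mol = Molecule(1)
print(mol.weight)   # 18.015, fetched through __getattr__
print(mol.name)     # 'water'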