This just went by this morning on reddit's /r/programming. It's a paper that analyzed Python--among a handful of other languages--to answer the question "are branch predictors still that bad at the big switch statement approach to interpreters?" Their conclusion: no.

    Our simulations [...] show that, as long as the payload in the
    bytecode remains limited and does not feature a significant amount
    of extra indirect branches, then the misprediction rate on the
    interpreter can even become insignificant (less than 0.5 MPKI).

(MPKI = mispredictions per thousand instructions)

Their best results were on simulated hardware with state-of-the-art prediction algorithms ("TAGE" and "ITTAGE"), but they also demonstrate that branch predictors in real hardware are getting better quickly. When running the Unladen Swallow test suite on Python 3.3.2, compiled with USE_COMPUTED_GOTOS turned off, Intel's Nehalem experienced an average of 12.8 MPKI--but Sandy Bridge drops that to 3.5 MPKI, and Haswell reduces it further to a mere *1.4* MPKI. (AFAICT they didn't compare against Python 3.3.2 using computed gotos, either in terms of MPKI or in overall performance.)

The paper is here:

    https://hal.inria.fr/hal-01100647/document

I suppose I wouldn't propose removing the labels-as-values opcode dispatch code yet. But perhaps that day is in sight! (For anyone who hasn't seen the two dispatch styles side by side, there's a sketch below.)

//arry/
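P.S. Here is a minimal sketch of the two dispatch techniques the paper compares. This is not CPython's actual ceval loop--the toy PUSH/ADD/PRINT/HALT bytecode and the run() helper are invented for illustration--and the &&label / goto * syntax is a GCC/Clang extension, not standard C:

    /* Toy interpreter contrasting switch dispatch with GCC's
     * labels-as-values ("computed goto") dispatch. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const unsigned char *code)
    {
        int stack[16], sp = 0;

    #ifdef USE_COMPUTED_GOTOS
        /* One indirect jump per opcode handler, so the predictor can
         * learn a separate target history for each dispatch site. */
        static void *dispatch[] = {&&op_push, &&op_add, &&op_print, &&op_halt};
    #define DISPATCH() goto *dispatch[*code++]
        DISPATCH();
    op_push:  stack[sp++] = *code++;          DISPATCH();
    op_add:   sp--; stack[sp-1] += stack[sp]; DISPATCH();
    op_print: printf("%d\n", stack[sp-1]);    DISPATCH();
    op_halt:  return;
    #else
        /* A single indirect branch (the switch) shared by every opcode --
         * the case the paper argues modern predictors now handle well. */
        for (;;) {
            switch (*code++) {
            case OP_PUSH:  stack[sp++] = *code++;          break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);    break;
            case OP_HALT:  return;
            }
        }
    #endif
    }

    int main(void)
    {
        /* Computes 2 + 3 and prints the result. */
        const unsigned char prog[] = {OP_PUSH, 2, OP_PUSH, 3,
                                      OP_ADD, OP_PRINT, OP_HALT};
        run(prog);
        return 0;
    }

Compiled plain it uses the switch; compiled with -DUSE_COMPUTED_GOTOS it uses the goto table, which is the same macro name CPython's build uses to select its computed-goto dispatch.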