The pyperformance benchmark suite had micro benchmarks on function calls, but I removed them because they were sending the wrong signal. A function call by itself doesn't matter when comparing two versions of CPython, or comparing CPython to PyPy. It's also very hard to measure the cost of a function call when you are using a JIT compiler which is able to inline the code into the caller...

So I moved all these "micro benchmarks" to a dedicated Git repository:

https://github.com/vstinner/pymicrobench

Sometimes I add new micro benchmarks when I work on one specific micro optimization. More generally, though, I suggest not running micro benchmarks and avoiding micro optimizations :-)

Victor

2018-07-10 0:20 GMT+02:00 Jeroen Demeyer <J.Demeyer at ugent.be>:
> Here is an initial version of a micro-benchmark for C function calling:
>
> https://github.com/jdemeyer/callbench
>
> I don't have results yet, since I'm struggling to find the right options
> for "perf timeit" to get a stable result. If somebody knows how to do
> this, help is welcome.
>
>
> Jeroen.
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com
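
For anyone hitting the same stability problem Jeroen describes, a minimal sketch of one approach, assuming the perf module (later renamed pyperf) is installed via pip: tune the system first to reduce jitter, then ask timeit for more samples. The exact benchmark expression here is only a placeholder:

    # Reduce system jitter (CPU isolation, disabling Turbo Boost); needs root.
    $ sudo python3 -m perf system tune

    # Time a C function call: -s runs the setup statement once per process,
    # --rigorous collects more values for a more stable result.
    $ python3 -m perf timeit --rigorous -s "f = len" "f('abc')"

Running on a quiet machine (no browser, no background builds) matters at least as much as the flags.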