On Wed, 22 Aug 2012 03:25:21 +1000
Steven D'Aprano <steve at pearwood.info> wrote:

> On 21/08/12 23:04, Victor Stinner wrote:
>
> > I don't like the timeit module for micro benchmarks, it is really
> > unstable (default settings are not written for micro benchmarks).
> [...]
> > I wrote my own benchmark tool, based on timeit, to have more stable
> > results on micro benchmarks:
> > https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
>
> I am surprised, because the whole purpose of timeit is to time micro
> code snippets.
>
> If it is as unstable as you suggest, and if you have an alternative
> which is more stable and accurate, I would love to see it in the
> standard library.

In my experience timeit is stable enough to know whether a change is
significant or not. No need for three-digit precision when the question
is whether there is at least a 10% performance difference between two
approaches.

Regards

Antoine.

-- 
Software development and contracting: http://pro.pitrou.net
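
[Editor's note: the comparison Antoine describes might look like the following sketch. The two statements being timed are hypothetical examples, not from the thread; the point is that taking the minimum over several repeats with `timeit.repeat` is usually enough to detect a ~10% difference.]

```python
import timeit

# Hypothetical micro benchmark: list comprehension vs. map()/lambda.
# Using the minimum of several repeats damps scheduler and cache noise,
# which is the main source of timeit's instability on tiny snippets.
t_comp = min(timeit.repeat("[x * 2 for x in range(100)]",
                           repeat=5, number=10_000))
t_map = min(timeit.repeat("list(map(lambda x: x * 2, range(100)))",
                          repeat=5, number=10_000))

ratio = t_map / t_comp
print(f"map/comprehension time ratio: {ratio:.2f}")
# A ratio well outside ~0.9-1.1 indicates a real (>10%) difference;
# finer distinctions sit below the noise floor for micro benchmarks.
```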