On 11 October 2016 at 03:15, Elliot Gorokhovsky <elliot.gorokhovsky at gmail.com> wrote:
> There's an option to provide setup code, of course, but I need to set up
> before each trial, not just before the loop.

Typically, I would just run the benchmark separately for each case, and then you'd do

# Case 1
python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
[Results 1]

# Case 2
python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
[Results 2]

The other advantage of doing it this way is that you can post your benchmark command lines, which will allow people to see what you're timing, and if there *are* any problems (such as a method lookup that skews the results) people can point them out.

Paul
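
As a purely illustrative sketch of what those two invocations might look like (the setup strings and the float-vs-string comparison below are made up for the example, not taken from Elliot's actual benchmark), each case gets its own command line, with its own -s setup:

# Case 1: sort a list of floats
python -m perf timeit -s 'import random; L = [random.random() for _ in range(1000)]' 'sorted(L)'

# Case 2: sort the same data as strings
python -m perf timeit -s 'import random; L = [str(random.random()) for _ in range(1000)]' 'sorted(L)'

Note that using sorted(L) rather than L.sort() leaves the setup list untouched between loop iterations, which sidesteps the "setup runs before the loop, not before each trial" issue Elliot raised.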