Right, that sounds good, but there's just one thing I don't understand
that's keeping me from using it. Namely, I would define a benchmark list L
in my setup, and then I would have code="F=FastList(L);F.fastsort()". The
problem here is that I'm then measuring the constructor time along with the
sort time, so wouldn't that skew the benchmark? Or does timeit separate the
two times?

On Tue, Oct 11, 2016, 2:22 AM Paul Moore <p.f.moore at gmail.com> wrote:

> On 11 October 2016 at 03:15, Elliot Gorokhovsky
> <elliot.gorokhovsky at gmail.com> wrote:
> > There's an option to provide setup code, of course, but I need to set up
> > before each trial, not just before the loop.
>
> Typically, I would just run the benchmark separately for each case, so
> you'd do something like:
>
> # Case 1
> python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
> [Results 1]
> # Case 2
> python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
> [Results 2]
>
> The other advantage of doing it this way is that you can post your
> benchmark command lines, which will allow people to see what you're
> timing, and if there *are* any problems (such as a method lookup that
> skews the results) people can point them out.
>
> Paul
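
[A minimal sketch of one way to separate the two costs, since timeit does
not split them for you: time the combined statement, time the constructor
alone, and subtract the baseline. The FastList class below is only a
stand-in for the one under discussion, assumed to expose a fastsort()
method as in the code= string above.]

import random
import timeit

# Hypothetical stand-in for the real FastList from the patch under discussion.
class FastList(list):
    def fastsort(self):
        self.sort()

L = [random.random() for _ in range(10000)]
trials = 1000

# What the original code= string measures: construction plus sort.
both = timeit.timeit("F = FastList(L); F.fastsort()",
                     globals=globals(), number=trials)

# Construction alone, as a baseline.
ctor = timeit.timeit("F = FastList(L)",
                     globals=globals(), number=trials)

# Subtracting the baseline gives an estimate of the sort time by itself.
print("approx. sort time per call:", (both - ctor) / trials)

[The same idea works from the command line with two separate
"python -m perf timeit" runs, one for the combined statement and one for
the constructor-only statement, comparing the reported times.]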