Hm... that is strange, but I don't think there's anything wrong with the way I'm timing, though I agree perf/timeit would be better. I ran the benchmark a couple of times, and the numbers line up exactly something like one run in five; perhaps not that crazy considering they're executing nearly the same code? Anyway, benchmarking technique aside, the point is that it works well for small lists (i.e. doesn't affect performance).

On Mon, Oct 10, 2016 at 2:53 PM Nathaniel Smith <njs at pobox.com> wrote:

> On Mon, Oct 10, 2016 at 1:42 PM, Elliot Gorokhovsky
> <elliot.gorokhovsky at gmail.com> wrote:
> > *** 10 strings ***
> > F.fastsort(): 1.6689300537109375e-06
> > F.sort(): 1.6689300537109375e-06
>
> I think something has gone wrong with your timing harness...
>
> For accurately timing microbenchmarks, you should use timeit, or
> better yet Victor Stinner's perf package:
> https://perf.readthedocs.io/
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
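[For reference, a minimal sketch of the kind of timeit-based harness Nathaniel suggests. The identical 1.6689e-06 readings above look like single calls landing on the clock's resolution; timeit amortizes that by repeating the statement many times. The list contents and repeat counts here are illustrative, not taken from the original benchmark.]

```python
import random
import timeit

# A small list like the "10 strings" case in the quoted benchmark;
# random floats are just a stand-in for the actual test data.
small = [random.random() for _ in range(10)]

# timeit runs the statement many times per measurement, so the total
# is well above clock resolution; the minimum over repeats is the
# most stable per-call estimate (least affected by system noise).
times = timeit.repeat("sorted(small)",
                      globals={"small": small},
                      number=100_000,
                      repeat=5)
per_call = min(times) / 100_000
print(f"{per_call:.3e} s per sorted() call")
```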