Terry Reedy wrote:
> On 8/26/2011 8:23 PM, Antoine Pitrou wrote:
>
>>> I would only agree as long as it wasn't too much worse
>>> than O(1). O(log n) might be all right, but O(n) would be
>>> unacceptable, I think.
>>
>> It also depends a lot on *actual* measured performance
>
> Amen. Some regard O(n*n) sorts to be, by definition, 'worse' than
> O(n*log n). I even read that in an otherwise good book by a university
> professor. Fortunately for Python users, Tim Peters ignored that
> 'wisdom', coded the best O(n*n) sort he could, and then *measured* to
> find out what was better for what types and lengths of arrays. So now
> we have a list.sort that sometimes beats the pure O(n log n) quicksort
> of C libraries.

A nice story, but Quicksort's worst case is O(n*n) too.

http://en.wikipedia.org/wiki/Quicksort

timsort is O(n) in the best case (all items already in order).

You are right though about Tim Peters doing extensive measurements:

http://bugs.python.org/file4451/timsort.txt

If you haven't read the whole thing, do so. I am in awe -- not just
because he came up with the algorithm, but because of the discipline
Tim demonstrated in such detailed testing. A far cry from a couple of
timeit runs on short-ish lists.

--
Steven
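You can see timsort's best-case behaviour for yourself by counting comparisons. A rough sketch (the `Counted` wrapper and `comparisons` helper are my own names, not anything from the stdlib): wrap each value in an object whose `__lt__` increments a counter, then sort. On already-ordered input, timsort detects one long run and does only about n comparisons; on shuffled input it does on the order of n log n.

```python
import random

class Counted:
    """Wraps a value and counts every comparison sort() makes."""
    count = 0  # class-wide comparison counter

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        Counted.count += 1
        return self.value < other.value

def comparisons(data):
    """Return how many comparisons sorting `data` with timsort takes."""
    Counted.count = 0
    sorted(Counted(v) for v in data)
    return Counted.count

n = 10_000
print("already sorted:", comparisons(range(n)))            # roughly n - 1
print("shuffled:      ", comparisons(random.sample(range(n), n)))  # roughly n log2 n
```

On CPython the sorted case comes out linear (one run-detection pass), while the shuffled case lands near n log2 n, which is the point Steven is making about the best case.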