On Sat, 05 Oct 2013 22:11:43 +0200 Georg Brandl <g.brandl at gmx.net> wrote:
> On 05.10.2013 21:42, Serhiy Storchaka wrote:
> > Please remind me, what was the common decision about CPython-only
> > optimizations which change computational complexity? E.g. constant
> > amortized time of += for byte objects and strings, or linear time of
> > sum() for sequences?
>
> This appears to be about changeset 499a96611baa:
>
> Issue #19087: Improve bytearray allocation in order to allow cheap popping of
> data at the front (slice deletion).
>
> I think the best way to describe the CPython strategy is that we don't like to
> optimize things that both have an idiomatic solution already (see str.join) and
> can't be replicated easily in other implementations.

Agreed with this.

I'll also point out that algorithmic complexity is only one aspect of
performance. For example, providing a fast C implementation of decimal is a
game changer even though its algorithmic complexity may have remained
identical.

Regards

Antoine.
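
For concreteness, here is a minimal sketch of the slice-deletion pattern the
changeset optimizes (the buffer contents are illustrative; with 499a96611baa,
CPython can make the front deletion cheap, while other implementations may
still copy the remaining bytes, i.e. O(len(buf))):

    # Consume a delimited prefix from a growing buffer.
    buf = bytearray(b"header:payload")
    n = buf.index(b":") + 1   # length of the prefix, delimiter included
    del buf[:n]               # cheap in CPython after this change; may copy elsewhere
    assert buf == bytearray(b"payload")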
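
And a sketch of the str.join idiom versus the += pattern it replaces (function
names are illustrative): repeated += on str is only fast on CPython, which can
sometimes resize the string in place when its reference count is 1, whereas
"".join() is linear in the total length on every implementation.

    # Quadratic on implementations without CPython's in-place resize trick.
    def concat_naive(parts):
        result = ""
        for p in parts:
            result += p
        return result

    # Idiomatic and O(total length) everywhere.
    def concat_idiomatic(parts):
        return "".join(parts)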