Looking over the messages from Marc and Vladimir, I'm going to add my 2c worth.

IMO, Marc's position is untenable if it can be demonstrated that the "average" program is likely to see "sparse" dictionaries, and that such dictionaries have an adverse effect on either speed or memory. The analogy is quite simple: you don't need to manually resize lists or dicts before inserting (allocating more storage is an internal implementation issue), so neither should you need to manually resize when deleting (reclaiming that storage is still internal implementation). Suggesting that the allocation of resources should be automatic, but the recycling of them should not, flies in the face of everything else - e.g., you don't need to delete each object; when it is no longer referenced, its memory is reclaimed automatically.

Marc's position is only reasonable if the specific case we are talking about is very, very rare, and unlikely to be hit by anyone with normal, real-world requirements or programs. In that case, exposing the implementation detail is reasonable.

So the question comes down to: "What is the benefit of Vladimir's patch?" Maybe we need some metrics on some dictionaries - for example, a doctored Python that kept stats for each dictionary and logged this info. The output should be able to tell you what savings you could expect. If you find that the average program really would not benefit at all (say, only a few K from a small number of dicts), then the horse was probably dead well before we started flogging it. If, however, you can demonstrate that serious benefits could be achieved, then interest may pick up and I too would lobby for automatic downsizing.

Mark.
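[A sketch of the instrumentation suggested above. Rather than a doctored interpreter, this uses a pure-Python stand-in: `TrackedDict` is a hypothetical dict subclass (the name and `stats()` method are my invention, not anything from the patch) that records its peak entry count, and `sys.getsizeof` is used to compare the memory a sparse dict actually holds against what a freshly rebuilt, compact copy of the same items would need. In CPython, deleting keys never triggers a table resize, so the gap between the two is exactly the "sparse dictionary" cost being debated.]

```python
import sys

class TrackedDict(dict):
    """Hypothetical instrumented dict: records the peak entry count so
    the potential savings from automatic downsizing can be estimated."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.peak = len(self)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.peak:
            self.peak = len(self)

    def stats(self):
        # Bytes currently held by this dict's hash table, versus what a
        # freshly built (i.e. downsized) dict of the same items needs.
        return {
            "peak_entries": self.peak,
            "live_entries": len(self),
            "current_bytes": sys.getsizeof(self),
            "compact_bytes": sys.getsizeof(dict(self)),
        }

d = TrackedDict()
for i in range(100_000):
    d[i] = i
for i in range(100_000 - 10):   # delete almost everything
    del d[i]

s = d.stats()
# Deletions only mark slots unused; the table keeps roughly the size it
# grew to at its 100,000-entry peak, even with only 10 live entries.
print(s["peak_entries"], s["live_entries"])
print(s["current_bytes"], "vs", s["compact_bytes"])
```

[Logging `stats()` for every long-lived dict in a real program would give the kind of numbers asked for above: if `current_bytes` minus `compact_bytes`, summed over all dicts, is only a few K, the case for automatic downsizing is weak.]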