"M.-A. Lemburg" <mal@lemburg.com> writes:

> The standard reasoning behind using overallocation for memory
> management is that typical modern malloc()s don't really allocate
> the memory until it is used (you know this anyway...),

That is not true. Each malloc implementation I know will always iterate
through the free list to find an appropriately-sized chunk, and go to
the OS if it doesn't find one.

Now, the *OS* might implement such allocations as a no-op, but on most
hardware, it can do so only in units of memory pages (e.g. 4k). Most
strings are smaller than a page, so if you allocate lots of strings,
every page allocated from the system will be used as well (at least to
fill in the string header). With overallocation, you will indeed
overallocate real pages, and consume real memory.

> This makes overallocation ideal for the case where you don't know
> the exact size in advance but where you can estimate a reasonable
> upper bound.

No. Overallocation has a real cost in terms of memory consumption.

> As Martin's benchmark showed, the counting strategy is
> faster for small chunks and this is probably due to the
> fact that pymalloc manages these.

I doubt this claim.

> Since pymalloc cannot know that an algorithm wants to
> use overallocation as memory allocation strategy, it
> would probably help to find a way to tell pymalloc
> about this fact. It could then either redirect the
> request to the system malloc() or use a different
> malloc strategy for these chunks.

That won't help.

Regards,
Martin
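[For illustration only, not part of the original thread: a minimal Python sketch contrasting the two strategies under discussion. The function names and the doubling growth factor are assumptions; the point is that the counting strategy allocates exactly once at the final size, while the overallocation strategy can leave the buffer up to roughly twice as large as the data it holds, and those extra pages are real memory once the buffer is created.]

```python
def join_by_counting(parts):
    # "Counting" strategy: measure first, then allocate exactly once
    # at the final size. No wasted capacity.
    total = sum(len(p) for p in parts)
    buf = bytearray(total)
    pos = 0
    for p in parts:
        buf[pos:pos + len(p)] = p
        pos += len(p)
    return bytes(buf)

def join_by_overallocation(parts):
    # "Overallocation" strategy: grow the buffer geometrically as data
    # arrives. The capacity can overshoot the final size by up to ~2x,
    # and (per the point above) that overshoot is real allocated memory.
    cap = 64
    buf = bytearray(cap)
    pos = 0
    for p in parts:
        while pos + len(p) > cap:
            cap *= 2
            buf.extend(bytearray(cap - len(buf)))
        buf[pos:pos + len(p)] = p
        pos += len(p)
    return bytes(buf[:pos])
```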