> It seems to me that the root problem is allocation spikes of
> legitimate, useful data. Perhaps then we need some sort of "test" to
> determine if those are legitimate. Perhaps checking every nth (with n
> decreasing as allocation bytes increases) object allocated during a
> "spike" could be useful. Then delay garbage collection until x
> consecutive objects are found to be garbage?
>
> It seems like we should be attacking the root cause rather than
> finding some convoluted math that attempts to work for all scenarios.

I think exactly the other way 'round. The timing of things should not
matter at all, only the exact sequence of allocations and
deallocations. I trust provable maths much more than I trust ad-hoc
heuristics, even if you think the math is convoluted.

> On a side note, the information about not GCing on string objects is
> interesting? Is there a way to override this behavior?

I think you misunderstand. Python releases unused string objects just
fine, and automatically. It doesn't even need GC for that.

Regards,
Martin
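
[Editor's note: Martin's point that only the sequence of allocations
and deallocations matters is how CPython's cyclic collector actually
works: a generation-0 collection fires once the count of allocations
minus deallocations of tracked container objects crosses a threshold,
independent of wall-clock time. A minimal, CPython-specific sketch;
the default thresholds shown may vary between versions:

    import gc

    # Collection is driven purely by allocation counts, not timing:
    # generation 0 is collected when (allocations - deallocations)
    # of container objects exceeds the first threshold.
    print(gc.get_threshold())   # typically (700, 10, 10)
    print(gc.get_count())       # current counters per generation

    # Allocating container objects advances the gen-0 counter;
    # deallocations decrement it, so the exact sequence of the two
    # determines when a collection fires.
    junk = [[] for _ in range(1000)]
    print(gc.get_count())
]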
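
[Editor's note: to illustrate the last point, CPython reclaims
strings through reference counting alone; the cyclic collector never
tracks them, since strings cannot form reference cycles. A minimal,
CPython-specific sketch:

    import gc

    # Strings carry no GC header, so the cyclic collector ignores
    # them entirely.
    s = "hello" * 100
    print(gc.is_tracked(s))    # False: never tracked by the collector
    print(gc.is_tracked([]))   # True: containers can form cycles

    # Even with the cyclic collector disabled, dropping the last
    # reference frees the string immediately via reference counting.
    gc.disable()
    del s                      # storage released here, no GC involved
    gc.enable()
]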