Gerson Kurz wrote:

> Ok, here is a 55-byte source:
>
>     b = {}
>     for i in range(1000000):
>         b[i] = [0] * 60
>
> --> segfault with cygwin.

As a follow-up, here is some interesting behaviour under a 2.3 debug build (the "[N refs]" lines are the debug build's running total reference count):

    Python 2.3a0 (#29, Oct 7 2002, 19:54:53) [MSC 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> b = {}
    [3319 refs]
    >>> for i in range(1000000):
    ...     b[i] = [i] * 60
    ...
    [62003414 refs]
    >>> for k in range(1000000):
    ...     del b[k]
    ...
    [1003419 refs]
    >>> print b[0]
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    KeyError: 0
    [1003458 refs]
    >>> print len(b.keys())
    0
    [1003621 refs]

The funny thing is that the memory allocated by python_d.exe *INCREASES* while the del b[k] loop runs. OK, so I guess it's time to run the garbage collector:

    >>> import gc
    [1003753 refs]
    >>> gc.collect()
    0
    [1003758 refs]
    >>> gc.isenabled()
    1
    [1003758 refs]
    >>>

Which frees up, like, nothing. Hm.

When I do the same thing with PythonWin 2.2.1 (#34, Apr 15 2002, 09:51:39) [MSC 32 bit (Intel)] on win32:

    >>> b = {}
    >>> for i in range(100000):
    ...     b[i] = [0] * 60
    ...
    >>> for k in range(100000):
    ...     del b[k]
    ...
    >>>

(Higher counts raise a MemoryError.) Here I get much more expected behaviour: the del b[k] loop frees up about 50% of the memory used (50 MB total), although gc.collect() likewise has no measurable effect.
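
For anyone who wants to watch the numbers directly rather than in the Task Manager, here is a minimal reproduction sketch of the same fill/del/collect experiment with an explicit memory probe. This is an illustration on my part, not part of the original report: it assumes a Linux-style /proc/self/status from which the current resident set size (VmRSS) can be read, so it won't work as-is on win32, and the rss_kb helper is hypothetical glue around that file.

    import gc

    def rss_kb():
        # Hypothetical Linux-only probe: current resident set size,
        # taken from the "VmRSS: <n> kB" line of /proc/self/status.
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return -1

    print("start: %d kB" % rss_kb())

    b = {}
    for i in range(1000000):
        b[i] = [0] * 60                 # one million 60-element lists

    print("after fill: %d kB" % rss_kb())

    for k in range(1000000):
        del b[k]                        # empty the dict one key at a time

    freed = gc.collect()                # nothing here is cyclic, so the
                                        # collector should report 0 objects
    print("after del + gc (%d collected): %d kB" % (freed, rss_kb()))

Whether the final figure drops back toward the starting figure is exactly the allocator behaviour under discussion; the collector reports 0 either way, since no reference cycles are involved.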