On Mon, 16 Jul 2012 02:00:58 +1000
Chris Angelico <rosuav at gmail.com> wrote:

> On Mon, Jul 16, 2012 at 1:55 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> > (I expect the difference in behaviour is due to the default ulimit under
> > Debian/Mint and RedHat/Fedora systems.)
>
> Possibly also virtual memory settings. Allocating gobs of memory with
> a huge page file slows everything down without raising an error.
>
> And since it's possible to have non-infinite but ridiculous-sized
> iterators, I'd not bother putting too much effort into protecting
> infinite iterators - although the "huge but not infinite" case is,
> admittedly, rather rarer than either "reasonable-sized" or "actually
> infinite".

In the real world, I'm sure "huge but not infinite" is much more
frequent than "actually infinite". Trying to list() an infinite
iterator is a programming error, so it shouldn't end up in production
code. However, data that grows bigger than expected (or that gets
disposed of too late) is quite a common thing.

<hint>
When hg.python.org died of OOM two weeks ago, it wasn't because of an
infinite iterator:
http://mail.python.org/pipermail/python-committers/2012-July/002084.html
</hint>

Regards

Antoine.


--
Software development and contracting: http://pro.pitrou.net
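
A minimal sketch of the two failure modes discussed above. It assumes
a POSIX system (the resource module is not available on Windows), and
the 100 MB address-space cap is an arbitrary choice made so the demo
fails quickly with MemoryError instead of grinding into swap:

    import itertools
    import resource

    # Cap the address space at ~100 MB so allocation failures
    # surface as MemoryError rather than thrashing the machine.
    limit = 100 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    # "Actually infinite": a programming error. list() can never
    # return; it just grows until the rlimit is hit.
    try:
        list(itertools.count())
    except MemoryError:
        print("infinite iterator: MemoryError, as expected")

    # "Huge but not infinite": the same failure, but caused by
    # data that is merely bigger than the programmer anticipated.
    try:
        list(range(10**9))
    except MemoryError:
        print("huge finite iterator: MemoryError too")

Both paths die the same way; the difference is that the first is a
bug by construction, while the second is correct code meeting
unexpectedly large data.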