Showing content from https://mail.python.org/pipermail/python-dev/2004-September/049117.html below:

[Python-Dev] open('/dev/null').read() -> MemoryError

Armin Rigo arigo at tunes.org
Mon Sep 27 22:05:33 CEST 2004
Hi,

On my system, which is admittedly an old Linux box (2.2 kernel), one test
fails:

>>> file('/dev/null').read()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
MemoryError

This is because:

>>> os.stat('/dev/null').st_size
4540321280L
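For comparison, here is a quick sanity check (a sketch assuming a modern, healthy Linux, where /dev/null is a character device whose reported size is 0):

```python
import os
import stat

st = os.stat('/dev/null')
# /dev/null is a character device; on a healthy system its reported
# size is 0, so read() has no reason to preallocate gigabytes.
print(stat.S_ISCHR(st.st_mode))  # expect True
print(st.st_size)                # normally 0
```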

This looks very broken indeed.  I have no idea where this number comes from.
I'd also complain if I were asked to allocate a buffer large enough to hold
that many bytes.  If we cared, we could "enhance" the file.read() method to
account for the possibility that stat() lied; above some large threshold it
may be preferable, instead of allocating a huge buffer up front, to fall back
to something like the following:

def read_all(f):
    # Read in fixed-size chunks rather than trusting st_size.
    result = []
    while True:
        buf = f.read(16384)
        if not buf:
            return ''.join(result)
        result.append(buf)
Of course, for genuinely large reads it's a disaster to have to allocate twice
as much memory this way.  Anyway, I'm not sure we care about working around
broken behaviour.  I'm just wondering whether os.stat() could lie in other
situations too.
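In today's Python the threshold idea could be sketched like this (the function name, the 1 MiB cutoff, and the plausibility check are my own illustration, not anything proposed in this thread):

```python
import os

CHUNK = 16384
THRESHOLD = 1 << 20  # 1 MiB: an arbitrary illustrative cutoff

def careful_read(path):
    """Read a whole file without trusting a suspiciously large st_size."""
    size = os.stat(path).st_size
    with open(path, 'rb') as f:
        if 0 < size <= THRESHOLD:
            # st_size looks plausible: read that many bytes in one go,
            # plus a tail read in case the file grew meanwhile.
            return f.read(size) + f.read()
        # st_size is zero (devices, pipes) or implausibly large:
        # read in fixed-size chunks instead of preallocating.
        chunks = []
        while True:
            buf = f.read(CHUNK)
            if not buf:
                return b''.join(chunks)
            chunks.append(buf)
```

On a broken stat() result like the one above, this pays only the cost of chunked reads instead of attempting a multi-gigabyte allocation.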


Armin
