Hi,

On my system, which is admittedly an old Linux box (2.2 kernel), one test fails:

>>> file('/dev/null').read()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
MemoryError

This is because:

>>> os.stat('/dev/null').st_size
4540321280L

This looks very broken indeed. I have no idea where this number comes from. I'd also complain if I were asked to allocate a buffer large enough to hold that many bytes.

If we cared, we could "enhance" the file.read() method to account for the possibility that stat() lied; maybe it is desirable, above some large threshold, to avoid allocating a huge buffer up front and instead revert to something like the following:

    result = []
    while 1:
        buf = f.read(16384)
        if not buf:
            return ''.join(result)
        result.append(buf)

Of course, for genuinely large reads it's a disaster to have to allocate twice as much memory. Anyway, I'm not sure we care about working around broken behaviour. I'm just wondering whether os.stat() could lie in other situations too.

Armin
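For what it's worth, here is a rough sketch of the kind of threshold-based fallback I mean. The function name safe_read and the 1 MB cutoff are placeholders for illustration only, not anything that exists in CPython:

    import os

    def safe_read(path, threshold=1024 * 1024):
        # Don't trust st_size for preallocation if it claims the file is
        # huge (or if stat() is simply lying, as on the broken /dev/null
        # above); fall back to reading in fixed-size chunks instead.
        f = open(path, 'rb')
        try:
            size = os.fstat(f.fileno()).st_size
            if size <= threshold:
                # Small (and plausible) size: a single read is fine.
                return f.read()
            result = []
            while 1:
                buf = f.read(16384)
                if not buf:
                    return ''.join(result)
                result.append(buf)
        finally:
            f.close()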