I've run into a problem with large files using Python 2.1.2 on a Linux 2.4.9 box. We've got a large file -- almost 6GB -- that Python chokes on even though the regular shell tools handle it fine. In particular, os.stat() of the file fails with EOVERFLOW and open() of the file fails with EFBIG. The stat() failure is particularly bad because it means os.path.exists() returns false.

strace shows that the other tools open the file passing O_LARGEFILE, but Python does not. (They pass it even for small files.) I can't find any succinct explanation of O_LARGEFILE, although Google turns up all sorts of pages that mention it. It looks like the right way to open large files, but on the Linux box in question it only seems to be defined in <asm/fcntl.h>. I haven't had any luck finding a decent way to invoke stat() so that it is prepared for a very large file.

I think Python is definitely broken here. Can anyone offer any clues or pointers to documentation? Better yet, a fix -- I'm happy to help integrate and test it.

Jeremy
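
P.S. In the meantime, here's a minimal workaround sketch I've been playing with for the open() side of the problem. It bypasses the built-in open() and passes O_LARGEFILE directly via os.open(). The flag value 0100000 is copied from <asm/fcntl.h> on this x86 box, so treat it as an assumption rather than anything portable, and the path below is just a placeholder.

    import os

    # os.O_LARGEFILE isn't exposed by this Python build; fall back to the
    # value from <asm/fcntl.h> on x86 Linux (an assumption -- check your
    # own headers before relying on it).
    O_LARGEFILE = getattr(os, "O_LARGEFILE", 0100000)

    def open_large(path, flags=os.O_RDONLY):
        # Open via os.open() so we can OR in O_LARGEFILE ourselves,
        # then wrap the descriptor in an ordinary file object.
        fd = os.open(path, flags | O_LARGEFILE)
        return os.fdopen(fd, "rb")

    f = open_large("/data/bigfile")   # hypothetical path to the 6GB file
    print f.read(1024)                # reading works; seek()/tell() may
    f.close()                         # still misbehave past 2GB without
                                      # 64-bit off_t in the interpreter

Note that this does nothing for the os.stat()/os.path.exists() failure. My understanding is that the real fix is to build Python itself with _LARGEFILE_SOURCE and _FILE_OFFSET_BITS=64 defined, so that open(), stat(), seek(), and tell() all go through the 64-bit interfaces, but I haven't verified that on this box yet.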