> On Monday, July 29, 2002, at 10:02 , Guido van Rossum wrote:
> >> For readlines() I think this is the right thing to do, but
> >> xreadlines() and file iterators could actually "do the right
> >> thing" and revert to a slower scheme if the underlying stream is
> >> unbuffered? Or is this overkill?
> >
> > What's the use case for -u again? I thought it was pretty rare that
> > this was needed, and when it's needed, the program probably knows that
> > it's needed, and can simply avoid using xreadlines.
>
> You're probably right (that's also why I wondered whether this
> was overkill). I was just triggered by the phrasing of the help
> message.

The only use case I can think of is a user typing interactive input
into a program that does something like

    for num, line in enumerate(sys.stdin):
        print "%4d. %s" % (num, line.rstrip("\n"))

The funny thing is that when you use the right idiom (calling
readline()), -u isn't even needed. :-)

The problem is that the stdio, and hence the file object, doesn't have
a way to read with the buffering semantics of the read() system call.
If you ask the system call for 1000 bytes, on an input tty or a pipe,
it will give you less as soon as some but not all requested bytes are
available. But fread() keeps asking for more until it either sees EOF
or has a full buffer. The only way to fix this would be to step
outside the stdio abstraction and use read() directly, both in
readlines() and in xreadlines(). As I've said before, let's do that
when we feel like rewriting the entire I/O system.

--Guido van Rossum (home page: http://www.python.org/~guido/)
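
For illustration, a minimal Python 2 sketch of the readline() idiom
mentioned above; unlike iterating over sys.stdin, it handles each line
as soon as it is typed, so it works interactively even without -u:

    import sys

    # readline() returns as soon as a complete line (or EOF) is
    # available on the stream, so each line is echoed back right away.
    num = 0
    while True:
        line = sys.stdin.readline()
        if not line:          # empty string signals EOF (e.g. ^D)
            break
        print "%4d. %s" % (num, line.rstrip("\n"))
        num += 1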
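
Likewise, a small sketch (assuming a tty or pipe on stdin and POSIX
semantics) contrasting the two buffering behaviours described in the
last paragraph, the read() system call versus a stdio-backed file
object:

    import os
    import sys

    fd = sys.stdin.fileno()

    # read() system call semantics: on a tty or pipe this returns as
    # soon as *some* input is available, possibly far fewer than the
    # 1000 bytes requested.
    chunk = os.read(fd, 1000)
    print "os.read() returned %d bytes" % len(chunk)

    # stdio semantics: the file object keeps reading until it has 1000
    # bytes or hits EOF, so an interactive user would have to type
    # about 1000 characters (or close the stream) before this returns.
    rest = sys.stdin.read(1000)
    print "sys.stdin.read() returned %d bytes" % len(rest)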