iterating over large files

Paul Brian pbrian at demon.net
Mon Apr 9 11:07:49 EDT 2001
Dear all,

I am trying to iterate line by line over a large text file, using
readlines().

Using the sizehint argument I can stop Python from trying to read the entire
file into memory and crashing; however, I am having trouble finding an elegant
way of getting Python to grab the next sizehint-sized chunk of the file.

For example, given an open 4 MB file called 'myfile', this works:

y = 1
while y < 4000:    # rough bound: 4 MB file / 1 KB sizehint
    for line in myfile.readlines(1024):
        print y, line
    y = y + 1

myfile.close()

However, it is rather ugly: it assumes that I know how much data readlines()
will actually read per call (according to the manual it depends on an internal
buffer, which I am not too familiar with) and how big the file is.

So is there some way I can replace that y loop with an EOF detector or
something similar?
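
Something like this is what I have in mind (just a rough sketch, relying on
readlines() returning an empty list once it reaches the end of the file):

while 1:
    lines = myfile.readlines(1024)
    if not lines:    # readlines() returns [] at end of file
        break
    for line in lines:
        print line

myfile.close()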

Thank you in advance.



