On Tue, 16 Nov 1999 09:39:20 +0100, you wrote:

>1) codecs written according to the "data
>   consumer model", instead of the "stream"
>   model.
>
>    class myDecoder:
>        def __init__(self, target):
>            self.target = target
>            self.state = ...
>        def feed(self, data):
>            ... extract as much data as possible ...
>            self.target.feed(extracted data)
>        def close(self):
>            ... extract what's left ...
>            self.target.feed(additional data)
>            self.target.close()

Apart from feed() instead of write(), how is that different from a
Java-like stream writer, as Guido suggested? He said:

>Andy's file translation example could then be written as follows:
>
># assuming variables input_file, input_encoding, output_file,
># output_encoding, and constant BUFFER_SIZE
>
>f = open(input_file, "rb")
>f1 = unicodec.codecs[input_encoding].stream_reader(f)
>g = open(output_file, "wb")
>g1 = unicodec.codecs[output_encoding].stream_writer(g)
>
>while 1:
>    buffer = f1.read(BUFFER_SIZE)
>    if not buffer:
>        break
>    g1.write(buffer)
>
>g1.close()
>f1.close()
>
>Note that we could possibly make these the only API that a codec needs
>to provide; the string object <--> unicode object conversions can be
>done using this and the cStringIO module. (On the other hand it seems
>a common case that would be quite useful.)

- Andy
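
[Editorial note: the following is a minimal, runnable sketch of the two models
being compared above. The "unicodec" module in the quoted example was only a
proposal at the time; the getincrementaldecoder/getreader/getwriter calls
below come from the codecs module that later shipped in the standard library,
so they illustrate the idea rather than the API under discussion. Note how the
consumer's feed()/close() pair ends up with the same shape as the stream
writer's write()/close().]

import codecs


class MyDecoder:
    """Push-based 'data consumer': decodes bytes and feeds text to a target."""

    def __init__(self, target, encoding="utf-8"):
        self.target = target
        # Incremental decoder keeps the in-between state (e.g. a partial
        # multi-byte sequence) across feed() calls.
        self.decoder = codecs.getincrementaldecoder(encoding)()

    def feed(self, data):
        # Extract as much text as possible from the bytes seen so far.
        self.target.feed(self.decoder.decode(data))

    def close(self):
        # Extract what's left, then close the downstream consumer.
        self.target.feed(self.decoder.decode(b"", final=True))
        self.target.close()


def recode_file(input_file, input_encoding, output_file, output_encoding,
                buffer_size=8192):
    """Stream-based version of the file translation example."""
    with open(input_file, "rb") as f, open(output_file, "wb") as g:
        reader = codecs.getreader(input_encoding)(f)   # wraps f, yields text
        writer = codecs.getwriter(output_encoding)(g)  # wraps g, accepts text
        while True:
            buffer = reader.read(buffer_size)
            if not buffer:
                break
            writer.write(buffer)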