> Sorry I wasn't clear. Like \F, I think that the best model is that of
> XML, Java and (I've learned recently) Perl. There should be a single
> encoding for the file. Logically speaking it should be decoded before
> tokenization or parsing. Practically speaking it may be simpler to fake
> this logical decoding in the implementation. I don't care how it is
> implemented. Logically the model should be that any encoding declaration
> affects the interpretation of the *file* not some particular construct
> in the file. This is the *only* model that makes sense.
>
> If this is too difficult to implement today then maybe we should wait on
> the whole feature until someone has time to do it right.

Right-o!

--Guido van Rossum (home page: http://www.python.org/~guido/)
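[Editor's note: the "decode the whole file before tokenizing" model quoted above can be sketched as follows. This is an illustrative sketch only, not Python's actual tokenizer code; the `decode_source` function and the simplified declaration regex are hypothetical, loosely following the convention later codified in PEP 263.]

```python
import re

def decode_source(raw: bytes) -> str:
    """Sketch of the single-encoding model: find a coding declaration
    near the top of the file, then decode the *entire* file with it
    before any tokenization or parsing happens."""
    # Only the first two lines may carry the declaration.
    head_match = re.match(rb'(?:[^\n]*\n){0,2}', raw)
    head = head_match.group(0) if head_match else b''
    decl = re.search(rb'coding[:=]\s*([-\w.]+)', head)
    encoding = decl.group(1).decode('ascii') if decl else 'utf-8'
    # The declaration governs the whole file, not any one construct:
    # decode everything first, then hand the text to the tokenizer.
    return raw.decode(encoding)

source = b"# -*- coding: latin-1 -*-\ns = '\xe9'\n"
text = decode_source(source)  # the byte 0xE9 becomes U+00E9 in the decoded text
```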