"M.-A. Lemburg" <mal@lemburg.com> writes: > In phase 1, the tokenizer checks the *complete file* for > non-ASCII characters and outputs single warning > per file if it doesn't find a coding declaration at > the top. Unicode literals continue to use [raw-]unicode-escape > as codec. Do you suggest that in this phase, the declared encoding is not used for anything except to complain? -1. I think people need to gain something from declaring the encoding; what they gain is that Unicode literals work right (i.e. that they really denote the strings that people see on their screen - given the appropriate editor). Regards, Martin