> So how about this:
>
> In phase 1, the tokenizer checks the *complete file* for
> non-ASCII characters and outputs a single warning
> per file if it doesn't find a coding declaration at
> the top. Unicode literals continue to use [raw-]unicode-escape
> as the codec.
>
> In phase 2, we enforce ASCII as the default encoding, i.e.
> the warning will turn into an error. The [raw-]unicode-escape
> codec will be extended to also support converting Unicode
> to Unicode, that is, only handle escape sequences in this
> case.

+1.

--Guido van Rossum (home page: http://www.python.org/~guido/)
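The phase-1 check described above could be sketched roughly like this. This is a simplified illustration, not the actual tokenizer code: the real implementation lives in C, and per PEP 263 only the first two lines of a file are searched for a coding declaration. The function name `check_source` and the warning string are made up for this example.

```python
import re
from typing import Optional

# PEP 263-style coding declaration, e.g. "# -*- coding: utf-8 -*-".
# Only the first two lines of a source file may carry it.
CODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)")

def check_source(raw: bytes) -> Optional[str]:
    """Return a warning message if the file contains non-ASCII
    bytes but declares no encoding, else None (hypothetical helper)."""
    first_two = raw.split(b"\n")[:2]
    has_declaration = any(CODING_RE.search(line) for line in first_two)
    has_non_ascii = any(byte > 127 for byte in raw)
    if has_non_ascii and not has_declaration:
        # Phase 1: a single warning per file; phase 2 would make
        # this an error instead.
        return "non-ASCII character found but no encoding declared"
    return None
```

In phase 2, the same detection logic would simply raise a `SyntaxError` instead of returning a warning.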