Showing content from https://mail.python.org/pipermail/python-dev/2010-September/104203.html below:

[Python-Dev] issue2180 and using 'tokenize' with Python 3 'str's

Benjamin Peterson benjamin at python.org
Tue Sep 28 05:27:06 CEST 2010
2010/9/27 Meador Inge <meadori at gmail.com>:
> which, as seen in the trace, is because the 'detect_encoding' function in
> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
> to tokenize 'first' (a 'str' object).  It seems to me that strings should
> still be able to be tokenized, but maybe I am missing something.
> Is the implementation of 'detect_encoding' correct in how it attempts to
> determine an encoding or should I open an issue for this?

Tokenize only works on bytes. You can open a feature request if you desire.
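(For readers of the archive: a quick sketch of the behavior being discussed, against a current CPython 3.x. `tokenize.tokenize` wants a `readline` that yields bytes, since `detect_encoding` inspects the raw bytes for a BOM and coding cookie; feeding it str lines trips exactly the `BOM_UTF8` check mentioned above. The str-accepting path is `tokenize.generate_tokens`, which skips encoding detection.)

```python
import io
import tokenize

src = "x = 1\n"

# Bytes path: tokenize() calls detect_encoding() on the raw bytes first,
# so the first token it yields is an ENCODING token.
byte_tokens = list(tokenize.tokenize(io.BytesIO(src.encode("utf-8")).readline))
print(byte_tokens[0].type == tokenize.ENCODING)  # True

# Str path: detect_encoding() does first.startswith(BOM_UTF8) with a
# bytes pattern against a str line, which raises TypeError.
try:
    list(tokenize.tokenize(io.StringIO(src).readline))
except TypeError as exc:
    print("str input rejected:", exc)

# generate_tokens() takes str lines and performs no encoding detection.
str_tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
print(any(tok.string == "x" for tok in str_tokens))  # True
```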



-- 
Regards,
Benjamin
