Skip Montanaro wrote:
> The unicode() builtin accepts an optional third argument, errors, which
> defaults to "strict". According to the docs if errors is set to "ignore",
> decoding errors are silently ignored. I seem to still get the occasional
> UnicodeError exception, however. I'm still trying to track down an actual
> example (it doesn't happen often, and I hadn't wrapped unicode() in a
> try/except statement, so all I saw was the error raised, not the input
> string value).

The errors argument is passed on to the codec you request. It's the codec
that decides how to implement the error handling, not the unicode()
builtin, so if you're seeing errors with 'ignore' then this is probably
the result of some problem in the codec.

-- 
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Software:                    http://www.egenix.com/files/python/
Meet us at EuroPython 2002:                 http://www.europython.org/
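[Editor's note: a minimal sketch of the behaviour described above. The thread concerns Python 2's unicode() builtin; the snippet below uses the Python 3 equivalent, bytes.decode(), where the errors argument is likewise forwarded to the codec. The sample bytes are illustrative, not from the original report.]

```python
import codecs

data = b'abc\xff\xfedef'  # \xff and \xfe are invalid UTF-8 start bytes

# 'strict' (the default) makes the codec raise on undecodable input.
try:
    data.decode('utf-8')
except UnicodeDecodeError as exc:
    print('strict raised:', exc.reason)

# 'ignore' asks the codec to silently drop the undecodable bytes.
print(data.decode('utf-8', 'ignore'))  # -> 'abcdef'

# The handling lives in the codec machinery, not the builtin:
# 'ignore' is just a registered error handler the codec looks up.
handler = codecs.lookup_error('ignore')
print(handler)
```

So a codec that mishandles its errors argument can still raise even when 'ignore' was requested, which matches the diagnosis above.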