> True; "u#" does exactly the same as "s#" -- it interprets the
> input as binary buffer.

It doesn't do exactly the same. If s# is applied to a Unicode object,
it transparently invokes the default encoding, which is sensible. If
u# is applied to a byte string, it does not apply the default
encoding. Instead, it interprets the string "as-is". I cannot see an
application where this is useful, but I can see many applications
where it is clearly wrong.

IMO, u# cannot and should not be symmetric to s#. Instead, it should
accept just Unicode objects, and raise TypeErrors for everything else.

Regards,
Martin
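[Editor's note: a minimal sketch of the asymmetry under discussion, for readers unfamiliar with PyArg_ParseTuple format codes. The function name and the double parse are illustrative only, not from the original thread; lengths use int as in the C API of that era.]

    #include <Python.h>

    /* Hypothetical extension function: parse the same argument with
       "s#" and then "u#" to show how the two codes differ. */
    static PyObject *
    show_asymmetry(PyObject *self, PyObject *args)
    {
        char *bytes;
        int blen;
        Py_UNICODE *uni;
        int ulen;

        /* "s#": a Unicode argument is transparently encoded with the
           default encoding before the byte buffer is handed back. */
        if (!PyArg_ParseTuple(args, "s#", &bytes, &blen))
            return NULL;

        /* "u#": under the behaviour argued for above, a byte-string
           argument would raise TypeError here instead of having its
           bytes reinterpreted "as-is" as Py_UNICODE data. */
        if (!PyArg_ParseTuple(args, "u#", &uni, &ulen))
            return NULL;

        return Py_BuildValue("ii", blen, ulen);
    }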