Guido van Rossum <guido@python.org> wrote:
> > FYI: Normalization is needed to make comparing Unicode
> > strings robust, e.g. u"é" should compare equal to u"e\u0301".
>
> Aha, then we'll see u == v even though type(u) is type(v) and
> len(u) != len(v).  /F's world will collapse. :-)

you're gonna do automatic normalization?  that's interesting.

will this make Python the first language to define strings as a
"sequence of graphemes"?  or was this just the cheap shot it
appeared to be?

</F>
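(for reference, a minimal sketch of the comparison under discussion,
using explicit normalization via the unicodedata module rather than
the automatic normalization Guido hints at -- this is illustration,
not what Python actually does on ==:)

    import unicodedata

    s1 = "\u00e9"    # e-acute as a single precomposed code point
    s2 = "e\u0301"   # 'e' followed by a combining acute accent

    print(s1 == s2)                          # False: raw code points differ
    print(unicodedata.normalize("NFC", s1) ==
          unicodedata.normalize("NFC", s2))  # True: equal after NFC normalization
    print(len(s1), len(s2))                  # 1 2 -- the lengths still differ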