"Fred L. Drake, Jr." wrote: > > Now, if we could do that for unicodedata and ucnhash, a lot more > people would be happy! > Marc-Andre, Bill: Would it be reasonable to have perfect_hash.py > actually compress the text strings used for the character names? > Since there's such a limited alphabet in use, something special > purpose would probably be easy and do a lot of good. When checking > the lookup, you could easily decode the string in the table to do the > comparison. I don't have the time to look into this, sorry. Other things are more important now, like changing the handling of the 's' and 't' parser markers for Unicode from UTF-8 to default encoding. This will complete the move from a fixed internal encoding to a locale dependent setting and should fix most problems people have noticed on their platforms. BTW, does the XML package already use the builtin Unicode support ? -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/