Josiah Carlson wrote:

> It doesn't seem strange to you to need to encode data twice to be
> able to have a usable sequence of characters which can be embedded
> in an effectively 7-bit email;

I'm talking about a 3.0 world where all strings are unicode and the
unicode <-> external coding is for the most part done automatically
by the I/O objects. So you'd be building up your whole email as a
string (aka unicode) which happens to only contain code points in the
range 0..127, and then writing it to your socket or whatever. You
wouldn't need to do the second encoding step explicitly very often.

--
Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | Carpe post meridiam!                 |
Christchurch, New Zealand          | (I'm not a morning person.)          |
greg.ewing at canterbury.ac.nz     +--------------------------------------+
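[A minimal sketch of the workflow described above, assuming Python 3's
io.TextIOWrapper as the automatic encoding layer; the in-memory buffer
and the message text are illustrative stand-ins for a real socket and
email, not anything from the post:]

    import io

    # Build the whole message as an ordinary str whose code points
    # all happen to fall in the range 0..127.
    message = (
        "Subject: hello\r\n"
        "\r\n"
        "Plain ASCII body.\r\n"
    )

    raw = io.BytesIO()  # stand-in for a socket's binary stream
    out = io.TextIOWrapper(raw, encoding="ascii", newline="")
    out.write(message)  # the str -> bytes encoding happens here, implicitly
    out.flush()

    assert raw.getvalue() == message.encode("ascii")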