In c.l.p, Henry Thompson wondered why printing would ignore __unicode__. Consider this:

import codecs

stream = codecs.open("/tmp/bla", "w", encoding="cp1252")

class Foo:
    def __unicode__(self):
        return u"\N{EURO SIGN}"

foo = Foo()
print >>stream, foo

This succeeds, but /tmp/bla now contains

<__main__.Foo instance at 0x4026e68c>

He argues that printing should instead invoke __unicode__, similar to the way __str__ is invoked automatically when writing to a byte stream.

I agree that this is desirable, but I wonder what the best approach would be:

A. Printing tries __str__, __unicode__, and __repr__, in this order.

B. A file indicates "unicode-awareness" somehow. For a Unicode-aware file, printing tries __unicode__, __str__, and __repr__, in this order.

C. A file indicates that it is "unicode-requiring" somehow. For a unicode-requiring file, printing tries __unicode__; if that fails, it tries __repr__ and converts the result to Unicode.

Which of these, if any, would be most Pythonish?

Regards,
Martin
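
P.S. To make the intent of option C concrete, here is a minimal sketch of that fallback written as plain user code rather than as a change to print itself; the helper name to_unicode is made up purely for illustration:

def to_unicode(obj):
    # Prefer __unicode__ when the object provides one (option C's first step)...
    try:
        return obj.__unicode__()
    except AttributeError:
        # ...otherwise fall back to repr() and convert the result to Unicode.
        return unicode(repr(obj), "ascii", "replace")

print >>stream, to_unicode(foo)   # writes the euro sign through the cp1252 stream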