Tim Peters <tim.one@comcast.net> writes:

> Define os.path.supports_unicode_filenames as "supports the 8 specific
> filenames tested by test_pep277.py", and then that's a precisely defined
> lower bound that should have nothing to do with Windows specifically.

If you take this definition, it will be very difficult to determine
whether this should be True or False on Unix. On some systems, it will
depend on environment variables whether these are valid file names or
not.

Also, the test checks whether os.listdir(u".") returns Unicode file
names - which is currently never the case on Unix, even though Unix
systems may support such file names as arguments. So should I add
os.path.listdir_with_unicode_argument_returns_unicode as well?

Furthermore, how would I implement os.path.supports_unicode_filenames
for ntpath? It seems that something would have to be exported from os.

> If this is thought to be a particularly stressful set of 8 specific
> file names (I can't guess -- "ascii" is the only one I can read
> <wink>), then change the set of file names to a more reasonable one.
> test_long.py doesn't try to create billion-digit integers, rather it
> restricts itself to "unbounded ints" that any *reasonable* platform
> can handle. Do likewise for Unicode filename support?

PEP 277 gives Python applications access to all file names on Windows
NT; this is a property unique to NT: on all other systems, you can
access all file names using byte strings. For the test to exercise
that feature, we need to choose a set of file names that cannot all be
represented as byte strings simultaneously.

Regards,
Martin
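[Editorial illustration of the distinction drawn above, for the Python 2.x
line under discussion (where str is a byte string and unicode is a separate
type). The helper names and the sample file name are hypothetical; this is a
sketch of the two separate questions - "does os.listdir(u".") return unicode
names?" versus "is a unicode file name accepted as an argument?" - not code
from the test or the stdlib.]

    import os

    def listdir_returns_unicode(path=u"."):
        # Probe whether os.listdir with a unicode argument yields unicode
        # entries (as PEP 277 provides on Windows NT) or byte strings
        # (as on Unix at the time of this discussion).
        for name in os.listdir(path):
            if not isinstance(name, unicode):
                return False
        return True

    def accepts_unicode_filename(name=u"pep277-pr\u00fcfung.txt"):
        # Probe whether a unicode file name is accepted as an *argument*;
        # this is independent of what os.listdir returns, which is the
        # point about possibly needing a second flag.
        try:
            f = open(name, "w")
            f.close()
            os.remove(name)
            return True
        except (UnicodeError, EnvironmentError):
            return False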