On 30Jan2016 0645, Serhiy Storchaka wrote:
> $ ./python -m timeit -s "import codecs; from encodings.cp437 import
> decoding_table" -- "codecs.charmap_build(decoding_table)"
> 100000 loops, best of 3: 4.36 usec per loop
>
> Getting rid from charmap_build() would save you at most 4.4 microseconds
> per encoding.

That is 0.0005 seconds even if you have imported *all* standard encodings!
I'm just as happy to be proven wrong. Perhaps I misinterpreted my original
profiling and then, embarrassingly, ran with the result for a long time
without retesting.

> And how you expected to store encoding_table in more efficient way?

There is nothing inefficient about its storage, but since the table never
changes it would be trivial to store it statically. "Building" the map would
then simply be a matter of obtaining a pointer into an already loaded memory
page. That is much faster than building it on load, but both costs are
clearly insignificant compared to other factors.

Cheers,
Steve
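
P.S. For anyone who wants to reproduce the measurement from a script rather
than the command line, here is a minimal sketch of the same timing. The loop
count and the rough hundred-encoding extrapolation are illustrative
assumptions, not figures from Serhiy's run:

    # Time codecs.charmap_build() against cp437's 256-character decoding
    # table, mirroring the timeit one-liner quoted above.
    import codecs
    import timeit
    from encodings.cp437 import decoding_table

    number = 100000  # assumed loop count, matching the quoted run
    per_call = timeit.timeit(
        "codecs.charmap_build(decoding_table)",
        globals={"codecs": codecs, "decoding_table": decoding_table},
        number=number,
    ) / number

    print("charmap_build: %.2f usec per call" % (per_call * 1e6))
    # Even if all standard charmap encodings (on the order of a hundred)
    # were imported, the total build cost stays well under a millisecond.
    print("~100 encodings: %.3f msec total" % (per_call * 100 * 1e3))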