On Tue, Sep 18, 2018 at 1:31 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> No idea. In my previous experiments with module import speed, I
> concluded that executing module bytecode generally was the dominating
> contributor, but that doesn't mean loading bytecode is costless.

My observations might not be so different. In a large application, we measured ~25-30% of start-up time being spent loading compiled bytecode. That includes probing the filesystem, reading the bytecode off disk, allocating heap storage, and un-marshaling objects into the heap.

Driving that percentage to ~0% with this change does not make the non-import parts of our module body functions execute faster. It does create a greater opportunity for application developers to do less work in module body functions, which is where the largest start-up time gains are now likely to be found.
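As a rough illustration of the split being discussed (this sketch is mine, not from the thread, and the stand-in module body is hypothetical), the two cost buckets can be timed separately: un-marshaling the bytecode, which is what a .pyc file stores after its header, versus executing the resulting code object as a module body. CPython 3.7+ also ships `python -X importtime` for measuring real imports.

```python
# Sketch: separate the cost of un-marshaling bytecode (the "loading"
# bucket measured at ~25-30% of start-up) from executing the module body.
import marshal
import time

SOURCE = "x = [i * i for i in range(1000)]\n"  # hypothetical module body

code = compile(SOURCE, "<demo>", "exec")
data = marshal.dumps(code)  # roughly what a .pyc stores, minus the header

t0 = time.perf_counter()
loaded = marshal.loads(data)  # un-marshal: part of the loading bucket
t1 = time.perf_counter()
namespace = {}
exec(loaded, namespace)       # module body execution: the other bucket
t2 = time.perf_counter()

print(f"unmarshal: {t1 - t0:.6f}s  exec: {t2 - t1:.6f}s")
```

On real modules the execution bucket usually dominates, which matches Antoine's observation above; eliminating the loading bucket still leaves the module-body work to be trimmed by the developer.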