
[Python-Dev] Caching directory files in import.c

James C. Ahlstrom jim@interet.com
Fri, 02 Nov 2001 15:22:17 -0500
I have a new version of my zip importing code.  As before,
it reads the file names from zipfiles and records them in
a global dictionary to speed up finding zip imports.
But what about imports from directories?

Looking at the code, I saw that I could do an os.listdir(path)
and record the directory's file names in the same dictionary.
Then it would not be necessary to perform a large number of
fopen() calls; the same dictionary lookup would be used instead.
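In Python terms, the idea is roughly this (a minimal sketch; the
cache and helper names are illustrative, not the actual import.c
code):

    import os

    # Illustrative cache: maps each sys.path directory to the set
    # of names it contains, filled on first use.
    _dir_cache = {}

    def dir_has_file(path, name):
        """True if 'name' exists in 'path', using one cached
        os.listdir() instead of an fopen() per candidate file."""
        if path not in _dir_cache:
            try:
                _dir_cache[path] = set(os.listdir(path))
            except OSError:
                _dir_cache[path] = set()
        return name in _dir_cache[path]

The importer would then call dir_has_file(path, "foo.py") for each
candidate suffix instead of attempting an fopen() for each one.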

Is this a good idea???

It seems it should be faster when a "large" percentage of
files in a directory are imported.  It should be slower
when only one file is imported from a directory with
many names.
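One rough way to get a feel for the tradeoff (a hedged sketch:
os.path.exists() stands in for the fopen() probes, and four probes
per import is an assumed figure):

    import os
    import timeit

    path = "."  # stand-in for a directory on sys.path
    names = os.listdir(path)

    # A single import normally probes a few candidate filenames
    # (foo.py, foo.pyc, ...); approximate that with four stat calls.
    probes = timeit.timeit(
        lambda: [os.path.exists(os.path.join(path, n))
                 for n in names[:4]],
        number=10000)

    # The cached approach pays for one full directory listing instead.
    listing = timeit.timeit(lambda: os.listdir(path), number=10000)

    print("four probes: %.1f us each" % (probes / 10000 * 1e6))
    print("listdir()  : %.1f us each" % (listing / 10000 * 1e6))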

I think I remember people discussing this before.  Is
the speedup real and worth the slight amount of additional
code?

JimA


