Hi,

I have a storage engine that stores a large number of files (e.g. more than 10,000) in a single directory. Running the code under cProfile, I found that out of a total CPU time of 1,118 seconds, 121 seconds are spent in 27,013 calls to open(). The number of calls is not the problem; what I find *very* discomforting is that Python spends about 2 minutes out of 18 minutes of CPU time just to get file handles, before it even spends any time reading from them. That works out to roughly 4.5 ms per open() call.

Could this be a problem with the way Python 2.7 obtains file handles from the OS, or is it a problem with large directories themselves?

Best regards,
Lukas
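P.S. To check whether the directory size alone explains the cost, a minimal benchmark along these lines might help (a rough Python 2.7-compatible sketch; the temp directory, file names, and counts are placeholders, not my actual storage layout):

    import os
    import tempfile
    import timeit

    # Create a throwaway directory with many small files to mimic the
    # layout (placeholder names and counts, not the real storage engine).
    tmpdir = tempfile.mkdtemp()
    for i in range(10000):
        with open(os.path.join(tmpdir, 'f%05d' % i), 'w') as f:
            f.write('x')

    # Time only the open()/close() pair on one file in the crowded
    # directory, so directory-lookup cost is isolated from read cost.
    target = os.path.join(tmpdir, 'f05000')

    def open_close():
        f = open(target)
        f.close()

    elapsed = timeit.timeit(open_close, number=10000)
    print('%.3f ms per open()' % (elapsed / 10000 * 1000))

If this reports far less than ~4.5 ms per open(), the directory size by itself is probably not the culprit.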