> Update of /cvsroot/python/python/dist/src/Lib
> In directory usw-pr-cvs1:/tmp/cvs-serv15469
>
> Modified Files:
>     pyclbr.py
> Log Message:
> Rewritten using the tokenize module, which gives us a real tokenizer
> rather than a number of approximating regular expressions.
> Alas, it is 3-4 times slower.  Let that be a challenge for the
> tokenize module.

Was this just for purity, or did it fix a bug?  The regexps there were
close to being heroically careful, and even so it was sometimes
uncomfortably slow to use the class browser in IDLE (which is based on
pyclbr), even on a fast machine.  A factor of 3 or 4 might make that
unbearable.

If it was for purity, note that tokenize is also based on mounds of
regexp tricks <wink>.
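
For reference, here's roughly what the tokenize-based approach looks
like -- a minimal sketch using tokenize.generate_tokens, not the actual
pyclbr code: walk the real token stream, track indentation with
INDENT/DEDENT tokens, and record module-level class/def names instead
of approximating them with regexps.

    import io
    import tokenize

    def find_toplevel_defs(source):
        """Return (keyword, name, lineno) for module-level class/def statements."""
        results = []
        depth = 0        # indentation level, tracked via INDENT/DEDENT tokens
        prev = None      # last NAME token, if it was "class" or "def"
        tokens = tokenize.generate_tokens(io.StringIO(source).readline)
        for tok_type, tok_str, (lineno, _col), _end, _line in tokens:
            if tok_type == tokenize.INDENT:
                depth += 1
            elif tok_type == tokenize.DEDENT:
                depth -= 1
            elif tok_type == tokenize.NAME:
                if prev in ("class", "def") and depth == 0:
                    results.append((prev, tok_str, lineno))
                prev = tok_str if tok_str in ("class", "def") else None
            else:
                prev = None
        return results

E.g. find_toplevel_defs(open("pyclbr.py").read()) yields the top-level
classes and functions with their line numbers.  The win over the regexp
approach is that strings, comments and continuation lines can't fool a
real tokenizer; the cost is exactly the kind of slowdown the checkin
message mentions.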