[David Goodger]
> ...
> For the Docstring Processing System, I am looking into gleaning
> information from the abstract syntax tree.
> ...
> - packages
> - modules
> - module attributes (+ values)
> - classes (+ inheritance)
> - class attributes (+ values)
> - instance attributes (+ values)
> - methods (+ formal parameters)
> - functions (+ formal parameters)
> ...
> I'd be very interested in pooling efforts to make this easier.  I
> know almost nothing about ASTs now, but that could change in a
> hurry :-).

Let me suggest you don't really want an AST -- you want an object model
for Python source that answers the questions above directly.  An AST may
be an effective (under-the-covers) implementation technique for getting
such info, but if you don't want to wait for people to argue about "the
right" AST and "the right" tree-based query language to make it better
than completely useless <wink>, you can answer all the stuff above by
building on tokenize.py now.

BTW, exploiting generators in 2.2 can make a tokenize-based approach much
more pleasant than before; see the newish Tools/scripts/cleanfuture.py
(in CVS) for an example of how easily 1-token lookahead parsing can be
coded now.
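
[Not part of the original post -- a minimal sketch of the tokenize-based,
1-token-lookahead style of scanning the paragraph above alludes to, written
with today's tokenize module and modern (3.x) syntax.  The function name
scan_defs and its exact behavior are illustrative assumptions, not anything
cleanfuture.py actually does.]

```python
# Hypothetical sketch: scan Python source with tokenize.py, using a
# one-token lookahead buffer to spot "def"/"class" statements and
# collect their names -- the kind of info an object model for Python
# source (modules, classes, functions) could be built from.
import io
import tokenize

def scan_defs(source):
    """Return (kind, name) pairs for 'def' and 'class' statements."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    lookahead = None  # holds at most one buffered token

    def get():
        nonlocal lookahead
        if lookahead is not None:
            tok, lookahead = lookahead, None
            return tok
        return next(tokens)

    def peek():
        nonlocal lookahead
        if lookahead is None:
            lookahead = next(tokens)
        return lookahead

    results = []
    try:
        while True:
            tok = get()
            if tok.type == tokenize.NAME and tok.string in ("def", "class"):
                nxt = peek()  # 1-token lookahead: the defined name
                if nxt.type == tokenize.NAME:
                    results.append((tok.string, nxt.string))
    except StopIteration:
        pass
    return results
```

The point of the buffer is that get()/peek() give you exactly the 1-token
lookahead the post mentions, without ever materializing a tree.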