Tim Peters wrote:
> > ...
>
> Reviewing this before 2.0 has been on my todo list for 3+ months, and
> finally got to it.  Good show!  I converted some of my by-hand scanners to
> use lastgroup, and like it a whole lot.  I know you understand why this is
> Good, so here's a simple example of an "after" tokenizer for those who don't
> (this one happens to tokenize REXX-like PARSE stmts):

Is there a standard technique for taking a regexp like this and applying it
to data fed in a little at a time?  Other than buffering the data forever?
That's something else I would like in a "standard Python lexer", if that's
the goal.

-- 
Paul Prescod - Not encumbered by corporate consensus
Simplicity does not precede complexity, but follows it.
 - http://www.cs.yale.edu/homes/perlis-alan/quotes.html
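[One common answer, sketched here rather than taken from the thread: keep a
buffer of only the unconsumed tail, and treat any match that runs all the way
to the end of the buffer as possibly incomplete, since more input could extend
it (e.g. "12" followed by "3" is one NUMBER, not two).  The toy TOKEN_RE below
is an assumed stand-in for the lastgroup-style tokenizer being discussed, not
the PARSE tokenizer itself.]

```python
import re

# Assumed toy grammar in the lastgroup style from the thread.
TOKEN_RE = re.compile(r"""
      (?P<NUMBER>\d+)
    | (?P<NAME>[A-Za-z_]\w*)
    | (?P<WS>\s+)
""", re.VERBOSE)

class IncrementalLexer:
    """Tokenize data fed in chunks, buffering only the unmatched tail."""

    def __init__(self):
        self.buf = ""

    def feed(self, chunk):
        """Return the tokens that are certainly complete so far."""
        self.buf += chunk
        tokens = []
        pos = 0
        while True:
            m = TOKEN_RE.match(self.buf, pos)
            # A match reaching the end of the buffer may grow when more
            # data arrives, so hold it back until the next feed()/close().
            if m is None or m.end() == len(self.buf):
                break
            tokens.append((m.lastgroup, m.group()))
            pos = m.end()
        self.buf = self.buf[pos:]   # keep only the unconsumed tail
        return tokens

    def close(self):
        """End of input: pending matches can no longer be extended."""
        tokens = []
        pos = 0
        while pos < len(self.buf):
            m = TOKEN_RE.match(self.buf, pos)
            if m is None:
                raise ValueError("lexical error at %r" % self.buf[pos:])
            tokens.append((m.lastgroup, m.group()))
            pos = m.end()
        self.buf = ""
        return tokens
```

The memory cost is then bounded by the longest single token rather than the
whole stream, which seems like the property a "standard Python lexer" would
want for stream input.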