> On 04 May 2001, M.-A. Lemburg said:
> > Gustavo Niemeyer submitted a patch which adds a tokenize like
> > method to strings and Unicode:
> >
> >     "one, two and three".tokenize([",", "and"])
> >     -> ["one", " two ", "three"]
> >
> > I like this method -- should I review the code and then check it in ?
>
> I concur with /F: -1 because you can do it easily with re.split().

-1 also.

--Guido van Rossum (home page: http://www.python.org/~guido/)
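[For context: the re.split() alternative mentioned above can be sketched as follows. The exact pattern is an assumption on my part, not something spelled out in the thread.]

```python
import re

# Split on either delimiter: a comma or the literal word "and".
# (Pattern is illustrative; a real use might want r",|\band\b" to
# avoid matching "and" inside longer words.)
parts = re.split(r",|and", "one, two and three")
print(parts)  # -> ['one', ' two ', ' three']
```

Note that, like the proposed tokenize method, this keeps the surrounding whitespace in the pieces; callers can strip() each piece if they want clean tokens.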