On 04 May 2001, M.-A. Lemburg said:
> Gustavo Niemeyer submitted a patch which adds a tokenize-like
> method to strings and Unicode:
>
>     "one, two and three".tokenize([",", "and"])
>     -> ["one", " two ", "three"]
>
> I like this method -- should I review the code and then check it in?

I concur with /F: -1, because you can do it easily with re.split().

        Greg
--
Greg Ward - Unix bigot                                  gward@python.net
http://starship.python.net/~gward/
I hope something GOOD came in the mail today so I have a REASON to live!!
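A minimal sketch of the re.split() alternative Greg alludes to; the exact pattern below is an assumption, not taken from the patch or the thread:

```python
import re

# Split on either delimiter: a literal comma or the word "and".
# The alternation pattern has no capturing groups, so the
# delimiters themselves are dropped from the result.
parts = re.split(r",|and", "one, two and three")
print(parts)  # -> ['one', ' two ', ' three']
```

One caveat: a bare `and` in the pattern also matches inside words (e.g. "sandwich"), so real use might prefer word boundaries, `r",|\band\b"`, or strip the surrounding whitespace from each piece afterwards.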