my latest changes fixed a couple of things, but broke one of the old RE tests, namely:

    re.match('\\x00ffffffffffffff', '\377') != None

or in other words, long hexadecimal escapes are cast down to 8-bit characters in RE. in SRE (after the latest change), they're cast down to the engine's internal word size (currently 16 bits).

is the old behaviour worth keeping? I'd rather not make the engine dependent on string types; it shouldn't really matter whether you're using unicode patterns on 8-bit target strings, or vice versa.

</F>
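to make the difference concrete, here's a minimal sketch of the two truncation behaviours being discussed (`cast_escape` is a hypothetical helper for illustration, not actual engine code):

```python
def cast_escape(value, bits):
    """Truncate an escape value to its low `bits` bits, as the engine would."""
    return value & ((1 << bits) - 1)

value = 0x00FFFFFFFFFFFFFF   # the long hex escape from the failing test

old_char = cast_escape(value, 8)    # old RE: cast down to an 8-bit character
sre_char = cast_escape(value, 16)   # SRE: cast down to the 16-bit internal word

print(chr(old_char) == '\377')   # True: under the old rule the escape matches '\377'
print(old_char == sre_char)      # False: the two engines disagree for this value
```

so the test only passes if the engine keeps truncating to 8 bits regardless of the pattern's string type.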