Is your feature request related to a problem? Please describe.
CPD tokenizers are under-tested, and hard to test. Most tests just do, in essence, lex(someSource), without actually checking what the result looks like. Tests that do check the tokens lex extremely short files (fewer than 10 tokens), which is unrealistic. Every language module uses its own testing utilities, so there's no consistency across modules.
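For illustration, a hedged sketch of the kind of test described above; the lex(...) helper and the class name are stand-ins, not any module's actual test utilities:

```java
import java.util.List;

import org.junit.Test;

public class SomeLanguageTokenizerTest {

    /** Stand-in for whatever helper the module uses to run its CPD tokenizer. */
    private List<String> lex(String source) {
        throw new UnsupportedOperationException("call the module's CPD tokenizer here");
    }

    @Test
    public void testTokenize() {
        // The only thing this effectively verifies is that lexing does not throw;
        // token kinds, images and positions are never inspected.
        lex("class Foo {}");
    }
}
```

Such a test passes for almost any tokenizer output, so regressions in token kinds, images or positions go unnoticed.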
Having no easy way to inspect CPD output, we also cannot know whether tokenizers for different languages behave similarly, with respect to such things as how token images are set. For example, some tokenizers use Integer.valueOf(tokenKind) as the token image, which is dubious (this is fixed on the 7.0.x branch).

Describe the solution you'd like
Use something similar to #2388 to generate CPD regression tests. Just dump the list of tokens, with their boundaries and text content. This would be super easy to set up and to monitor for regressions.
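This could look roughly like the snapshot-style test below; the class name, file paths and the lexWithCpdTokenizer hook are hypothetical, and TokenDumper is sketched after the format description further down:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class JavaTokensDumpTest {

    /** Stand-in hook; this would be wired to the module's actual CPD tokenizer. */
    private List<TokenDumper.CpdToken> lexWithCpdTokenizer(String source) {
        throw new UnsupportedOperationException("not a real PMD API");
    }

    @Test
    public void dumpMatchesCommittedSnapshot() throws Exception {
        // Both paths are hypothetical; the expected dump is committed next to the sample.
        Path sample = Paths.get("src/test/resources/cpd/testdata/Foo.java");
        Path expectedDump = Paths.get("src/test/resources/cpd/testdata/Foo.java.txt");

        String source = new String(Files.readAllBytes(sample), StandardCharsets.UTF_8);
        String actual = TokenDumper.dump(lexWithCpdTokenizer(source));
        String expected = new String(Files.readAllBytes(expectedDump), StandardCharsets.UTF_8);

        assertEquals(expected, actual);
    }
}
```

A regression then shows up as a readable diff between the committed dump and the freshly generated one.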
Possible format:
public class Foo { // CPD-ON
}

L1 [public] PUBLIC,1:1,1:6 [ ] WHITESPACE,1:7,1:7 [class] CLASS,1:8,1:12 [ ] WHITESPACE,1:13,1:13 [{] LBRACE,1:14,1:14 [ ] WHITESPACE,1:15,1:15 [// CPD-ON] COMMENT,1:16,1:24 [\n] WHITESPACE,1:25,1:25
L2 [}] RBRACE,2:1,2:1 [\n] WHITESPACE,2:2,2:2
This format would be
[$image] $tokenKindName,$beginLine:$beginCol,$endLine:$endCol
with appropriate escaping of the image, and with the dump split by source line.
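For illustration, a rough sketch of a dumper producing this format; CpdToken is a hypothetical record standing in for whatever CPD's token type exposes, and the escaping rules are just an example, not a spec:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public final class TokenDumper {

    /** Hypothetical token shape: text content plus begin/end coordinates. */
    public record CpdToken(String image, String kindName,
                           int beginLine, int beginColumn,
                           int endLine, int endColumn) {}

    /** Renders [$image] $tokenKindName,$beginLine:$beginCol,$endLine:$endCol. */
    private static String formatToken(CpdToken t) {
        return "[" + escape(t.image()) + "] "
                + t.kindName() + ","
                + t.beginLine() + ":" + t.beginColumn() + ","
                + t.endLine() + ":" + t.endColumn();
    }

    /** Escapes characters that would make the dump ambiguous or span several lines. */
    private static String escape(String image) {
        return image.replace("\\", "\\\\")
                    .replace("\n", "\\n")
                    .replace("\r", "\\r")
                    .replace("\t", "\\t")
                    .replace("]", "\\]");
    }

    /** Groups tokens by their begin line, emitting one "Ln ..." entry per source line. */
    public static String dump(List<CpdToken> tokens) {
        Map<Integer, List<CpdToken>> byLine = tokens.stream()
                .collect(Collectors.groupingBy(CpdToken::beginLine,
                                               TreeMap::new,
                                               Collectors.toList()));
        StringBuilder sb = new StringBuilder();
        byLine.forEach((line, lineTokens) -> {
            sb.append('L').append(line);
            for (CpdToken t : lineTokens) {
                sb.append(' ').append(formatToken(t));
            }
            sb.append('\n');
        });
        return sb.toString();
    }
}
```

Grouping by begin line keeps the diff local to the source lines that actually changed.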
Describe alternatives you've considered