"Martin v. Löwis" writes: > > If they do fail, they're not "false" positives. If they're "false", > > then the test is broken, no? > > Correct. But they might well be broken, no? I would hope some effort is made that they not be. If they generate a positive, I would expect that the contributor would try to fix that before committing, no? If they discover that it's "false", they fix or remove the test; otherwise they document it. > > So find a way to label them as tests > > added ex-post, with the failures *not* being regressions but rather > > latent bugs newly detected, and (presumably) as "wont-fix". > > No such way exists, Add a documentation file called "README.expected-test-failures". AFAIK documentation is always acceptable, right? Whether that is an acceptable solution to the "latent bug" problem is a different question. I'd rather know that Python has unexpected behavior, and have a sample program (== test) to demonstrate it, than not. YMMV.