On 3/4/2013 5:24 PM, Barry Warsaw wrote:

> What I'm looking for is something that automated tools can use to easily
> discover how to run a package's tests. I want it to be dead simple for
> developers of a package to declare how their tests are to be run, and what

I am writing a package that has tests for each module (which I so far run
individually for each module) using a custom test framework. I am planning
to add a function to the package to run all of them. Should I call it
'testall', 'test_all', 'runtests', or something else? I really do not care.
It would be used like this:

import xxx; xxx.testall()

Of course, this would not work with the stdlib since /lib is not a package
that can be imported. I could put the same code in the top level of a
module, to be run when imported (but that would not work with re-imports),
or put the function in my test module. I am willing to adjust to a standard
when there is one.

What I do suggest is that package developers should only have to provide
one standard entry point that hides all package-specific details. I presume
the side-effect spec would be error messages to stderr. Any return
requirement should be as simple as possible, such as an all-pass True/False,
or (number run, number failed) by whatever counting method the package/test
framework uses. (Note: my framework does not count tests, as I only care
about failure messages, but testall could count modules tested and those
with a failure.)

> extra dependencies they might need. It seems like PEP 426 only addresses the
> latter. Maybe that's fine and a different PEP is needed to describe automated
> test discover, but I still think it's an important use case.

New PEP.

--
Terry Jan Reedy
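[For concreteness, a minimal sketch of the kind of per-package entry point
described above. It assumes a package named 'xxx' whose test modules are
named test_* and each expose a main() that raises on failure; those names
and the (modules tested, modules failed) return convention are illustrative
assumptions, not part of any PEP or the stdlib.]

import importlib
import pkgutil
import sys
import traceback

def testall(package_name='xxx'):
    """Run every test_* module in the package; print failures to stderr.

    Returns (modules tested, modules with a failure) rather than counting
    individual tests, matching the counting convention suggested above.
    """
    package = importlib.import_module(package_name)
    tested = failed = 0
    for info in pkgutil.iter_modules(package.__path__):
        if not info.name.startswith('test_'):
            continue
        tested += 1
        try:
            mod = importlib.import_module(package_name + '.' + info.name)
            mod.main()   # assumed per-module runner that raises on failure
        except Exception:
            failed += 1
            traceback.print_exc(file=sys.stderr)
    return tested, failed

Used as in the post: import xxx; xxx.testall() then returns the two counts.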