Neal Norwitz wrote:
> [Michael working on cleaning up the unittest module]
>
> It seems like most of the good ideas have been captured already. I'll
> throw two more (low priority) ideas out there.
>
> 1) Randomized test runner/option that runs tests in a random order
> (like regrtest.py -r, but for methods)

Should be easy enough.

> 2) decorator to verify a test method is supposed to fail
>
> #2 is useful for getting test cases into the code sooner rather than
> later. I'm pretty sure I have a patch that implements this
> (http://bugs.python.org/issue1399935). It didn't fit in well with the
> old unittest structure, but seems closer to the direction you are
> headed.

We normally just prefix the test name with 'DONT' so that it isn't
actually run...

> One other idea that probably ought not be done just yet: add a way of
> failing with the test continuing. We use this at work (not in Python
> though) and when used appropriately, it works quite well. It provides
> more information about the failure. It looks something like this:
>
>     def testMethod(self):
>         # setup
>         self.assertTrue(precondition)
>         self.expectTrue(value)
>         self.expectEqual(expected_result, other_value)
>
> All the expect methods duplicate the assert methods. Asserts cause the
> test to fail immediately; expects don't fail immediately, allowing the
> test to continue. All the expect failures are collected and printed
> at the end of the method run. I was a little skeptical about assert
> vs expect at first, but it has proven useful in the long run. As I
> said, I don't think this should be done now, maybe later.
>
> n

I like this pattern.

Michael Foord
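A minimal sketch of idea 1, using only the stock unittest module:
TestLoader.getTestCaseNames() is the real hook that decides method
order, so shuffling its result randomises the run within each TestCase
(the RandomisingLoader name is just for illustration):

    import random
    import unittest

    class RandomisingLoader(unittest.TestLoader):
        """Load test methods in a random order, like regrtest.py -r
        but for methods."""

        def getTestCaseNames(self, testCaseClass):
            # Collect the names the usual way, then shuffle them.
            # Seeding random and printing the seed first would make a
            # failing order reproducible.
            names = list(super().getTestCaseNames(testCaseClass))
            random.shuffle(names)
            return names

    if __name__ == '__main__':
        unittest.main(testLoader=RandomisingLoader())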
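For idea 2, a standalone sketch of the decorator; the expected_failure
name is made up here, and unittest itself later grew the real thing as
unittest.expectedFailure (Python 2.7/3.x):

    import functools
    import unittest

    def expected_failure(test_method):
        """Mark a test as known-broken: it passes if it raises and
        fails if it unexpectedly succeeds."""
        @functools.wraps(test_method)
        def wrapper(self, *args, **kwargs):
            try:
                test_method(self, *args, **kwargs)
            except Exception:
                return  # the failure we were expecting
            self.fail("test passed but was expected to fail")
        return wrapper

    class TestFutureFeature(unittest.TestCase):
        @expected_failure
        def test_not_done_yet(self):
            self.assertEqual(1 + 1, 3)  # wrong on purpose until the feature lands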
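And a sketch of the expect/assert split, keeping the
expectTrue/expectEqual names from the mail; the collect-in-a-list,
report-in-tearDown bookkeeping is one guess at an implementation
(subclasses overriding setUp/tearDown would need to call super();
Python 3.4's TestCase.subTest now covers much of the same ground):

    import unittest

    class ExpectingTestCase(unittest.TestCase):
        """Asserts fail a test immediately; expects are collected and
        reported together once the method has run to the end."""

        def setUp(self):
            self._expect_failures = []

        def expectTrue(self, value, msg=None):
            if not value:
                self._expect_failures.append(
                    msg or f"expectTrue failed: {value!r}")

        def expectEqual(self, expected, actual, msg=None):
            if expected != actual:
                self._expect_failures.append(
                    msg or f"expectEqual failed: {expected!r} != {actual!r}")

        def tearDown(self):
            # Report every collected expectation as one failure at the
            # end of the method run, as described above.
            if self._expect_failures:
                self.fail(f"{len(self._expect_failures)} expectation(s) failed:\n"
                          + "\n".join(self._expect_failures))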