It is certainly possible to extend Parachute to retest just the tests that failed last time. Since Parachute can run against a list of tests, all you need is a function that saves the names of the tests that fail. The following might be one way to do that:
    (defun collect-test-failure-names (test-results)
      "This function takes the report output of a Parachute test and
    returns a list of the names of the tests that failed."
      (when (typep test-results 'parachute:report)
        (loop for test-result across (parachute:results test-results)
              when (and (typep test-result 'parachute:test-result)
                        (eq (parachute:status test-result) :failed))
                collect (parachute:name (parachute:expression test-result)))))
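One might then re-run only the failures along these lines. This is a sketch: it assumes parachute:test returns the report object and, as noted above, accepts a list of test designators; 'my-test-suite is a placeholder for your own top-level test, and parachute:quiet is the built-in report class that suppresses output:

    (let* ((report (parachute:test 'my-test-suite :report 'parachute:quiet))
           (failed (collect-test-failure-names report)))
      ;; Re-run only the tests whose names were collected above.
      (when failed
        (parachute:test failed)))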
You might also consider the helper library Protest as an add-on, since it has a Parachute module.
Since I questioned what is meant by extensibility at the very beginning of this report, allow me to quote the Parachute documentation:
"Extending Parachute Test and Result Evaluation
"Parachute follows its own evaluation semantics in order to run tests. Primarily this means that most everything goes through one central function called eval-in-context. This functions allows you to customise evaluation based on both what the context is, and what the object being "evaluated" is.
"Usually the context is a report object, but other situations might also be conceived. Either way, it is your responsibility to add methods to this function when you add a new result type, some kind of test subclass, or a new report type that you want to customise according to your desired behaviour.
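To make that concrete, here is a minimal sketch of what such a method might look like. The class name my-report is hypothetical; only eval-in-context, plain, and test-result come from Parachute itself:

    (defclass my-report (parachute:plain) ())

    (defmethod parachute:eval-in-context ((context my-report)
                                          (result parachute:test-result))
      ;; Custom before/after behaviour would go here; delegate to the
      ;; default evaluation for the actual work.
      (call-next-method))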
"The evaluation of results is decoupled from the context and reports in the sense that their behaviour does not, by default, depend on it. At the most basic, the result class defines a single :around method that takes care of recording the duration of the test evaluation, setting a default status after finishing without errors, and skipping evaluation if the status is already set to something other than :unknown.
"Next we have a result object that is interesting for anything that actually produces direct test results: value-result. Upon evaluation, if the value slot is not yet bound, it calls its body function and stores the return value thereof in the value slot.
"However, the result type that is actually used for all standard test forms is the comparison-result. This also takes a comparator function and an expected result to compare against upon completion of the test. If the results match, then the test status is set to :passed, otherwise to :failed.
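For instance, the is test form produces exactly such a comparison-result, taking the comparator function first and the expected value second:

    (parachute:define-test arithmetic
      ;; Comparator, expected value, then the form under test.
      (parachute:is = 4 (+ 2 2))
      (parachute:is equal '(1 2) (list 1 2)))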
"Since Parachute allows for a hierarchy in your tests, there have to be aggregate results as well, and indeed there are. Two of them, actually. First is the base case, namely parent-result, which does two things on evaluation: one, it binds *parent* to itself to allow other results to register themselves upon construction, and two, it sets its status to :failed if any of the children have failed.
"Finally we have the test-result, which takes care of properly evaluating an actual test object. What this means is to evaluate all dependencies before anything else happens, and to check the time limit after everything else has happened. If the time limit has been exceeded, the description is set accordingly and the result is marked as :failed. In its main eval-in-context method, however, it checks whether any of the dependencies have failed, and if so, marks itself as :skipped. Otherwise it calls eval-in-context on the actual test object.
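Both behaviours map onto documented define-test options, as in this small sketch:

    (parachute:define-test prerequisite
      (parachute:true (plusp 1)))

    ;; DEPENDENT is skipped when PREREQUISITE fails, and fails if it
    ;; runs for longer than half a second.
    (parachute:define-test dependent
      :depends-on (prerequisite)
      :time-limit 0.5
      (parachute:is eql 3 (+ 1 2)))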
"The default evaluation procedure for a test itself is to simply call all the functions in the tests list in a with-fixtures environment.
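The fixtures themselves are declared on the test. Here is a brief sketch of the :fix option, which saves the listed bindings before the body runs and restores them afterwards:

    (defvar *counter* 0)

    (parachute:define-test fixture-demo
      :fix (*counter*)
      (incf *counter*)
      ;; *COUNTER* is restored to its previous value after the test.
      (parachute:is = 1 *counter*))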
"And that describes the semantics of default test procedures. Actual test forms like is are created through macros that emit an (eval-in-context context (make-instance 'comparison-result #|…|#)) form. The *context* object is automatically bound to the context object on call of eval-in-context and thus always refers to the current context object. This allows results to be evaluated even from within opaque parts like user-defined functions.
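That last point means a check can live inside an ordinary helper function and still report into the running test:

    (defun check-positive (x)
      ;; TRUE works here because *CONTEXT* is dynamically bound while
      ;; the surrounding test is being evaluated.
      (parachute:true (> x 0)))

    (parachute:define-test helper-demo
      (check-positive 5))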
"Report Generation
"It should be possible to get any kind of reporting behaviour you want by adding methods that specialise on your report object to eval-in-context. For the simple case where you want something that prints to the REPL but has a different style than the preset plain report, you can simply subclass that and specialise on the report-on and summarize functions that then produce the output you want.
"Since you can control pretty much every aspect of evaluation rather closely, very different behaviours and recovery mechanisms are also possible to achieve. One final aspect to note is result-for-testable, which should return an appropriate result object for the given testable. This should only return fresh result objects if no result is already known for the testable in the given context. The standard tests provide for this; however, they only ever return a standard test-result instance. If you need to customise the behaviour of the evaluation for that part, it would be a wise idea to subclass test-result and make sure to return instances thereof from result-for-testable for your report.
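Following that advice, a sketch might look like the following; the :expression initarg mirrors how the default test-result appears to store its test, but treat it as an assumption:

    (defclass my-test-result (parachute:test-result) ())

    (defmethod parachute:result-for-testable (test (context my-report))
      ;; NOTE: per the documentation, a real implementation should first
      ;; check whether a result for TEST is already known in CONTEXT
      ;; before creating a fresh one.
      (make-instance 'my-test-result :expression test))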
"Finally it should be noted that if you happen to create new result types that you might want to run using the default reports, you should add methods to format-result that specialise on the keywords :oneline and :extensive for the type. These should return a string containing an appropriate description of the test in one line or extensively, respectively. This will allow you to customise how things look to some degree without having to create a new report object entirely."
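For the my-test-result class sketched above, such methods might look like this; the argument shape (the result, then the type keyword) is taken from the description:

    (defmethod parachute:format-result ((result my-test-result)
                                        (type (eql :oneline)))
      (format NIL "~a" (parachute:name (parachute:expression result))))

    (defmethod parachute:format-result ((result my-test-result)
                                        (type (eql :extensive)))
      (format NIL "Test ~a finished with status ~a."
              (parachute:name (parachute:expression result))
              (parachute:status result)))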