📘Labels = Test Name
You can assume that a "label" is a "test name".
📘What is BASE and HEAD?
HEAD is the current commit, for which the tests to run will be decided (the HEAD of your feature branch, for example).
BASE is the remote commit that we are comparing to. We have historical coverage info about it.
Label analysis is the process through which Codecov takes the set of tests in your test suite (tests in HEAD) and derives a subset of them that will properly cover the diff between two given commits (HEAD vs BASE).
To do that it breaks the testing process into 2 parts: (1) collecting the labels (test names) in your test suite, and (2) analyzing which of those labels actually need to run, given the diff.
What is done with the resulting set of labels is up to you. You can have them reported (with --dry-run) or executed by the Codecov CLI runner (more info below).
Notice that the Codecov CLI needs to be able to collect your tests. To do that, you need to set up your environment to the point that test collection can be performed. You might also have to add configuration to the runner that does the collection (more info below).
Codecov uses (1) the set of labels collected in the checked out HEAD code, (2) Static Analysis information already uploaded to Codecov for the BASE and HEAD commits and (3) the git diff between HEAD and BASE to calculate the subset of labels that need to be executed.
From the information above Codecov extracts 4 different lists, which are returned to the CLI at the end of stage 2:
- absent_labels: labels collected in HEAD that are absent from the BASE coverage report (e.g. new tests)
- present_diff_labels: labels already recorded that touch code changed in the diff
- global_level_labels: labels associated with global-level code, i.e. code that can affect the whole test suite
- present_report_labels: all labels already recorded in the coverage report for BASE
The subset that necessarily needs to run is the union of the first three lists (i.e. excluding already-recorded labels that don't touch the diff):
set(absent_labels + present_diff_labels + global_level_labels)
Notice that by changing the BASE-HEAD pair, the set of present_diff_labels will also change.
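As a concrete illustration, the selection above could be computed from the four lists like this (a minimal sketch in Python; the values are the sample ones from the result payload shown later on this page):

# Sketch: deriving the labels to run from a label-analysis result.
result = {
    "present_report_labels": ["label_1", "label_2", "label_3", "label_4"],
    "absent_labels": ["label_new"],
    "present_diff_labels": ["label_1", "label_2"],
    "global_level_labels": ["label_3"],
}
labels_to_run = (
    set(result["absent_labels"])
    | set(result["present_diff_labels"])
    | set(result["global_level_labels"])
)
print(labels_to_run)  # {'label_new', 'label_1', 'label_2', 'label_3'}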
Label Analysis is the process that collects a set of test names (labels) from the test suite, and given a BASE commit to compare against, gets the subset of labels that actually need to be run in order to fully test the diff.
Usage: codecovcli label-analysis [OPTIONS]
Options:
  --token TEXT                  The static analysis token (NOT the same token
                                as upload) [required]
  --head-sha TEXT               Commit SHA (with 40 chars) [required]
  --base-sha TEXT               Commit SHA (with 40 chars) [required]
  --runner-name, --runner TEXT  Runner to use
  --max-wait-time INTEGER       Max time (in seconds) to wait for the label
                                analysis result before falling back to running
                                all tests. Default is to wait forever.
  --dry-run                     Useful during setup. This will run the label
                                analysis, but will print the result to stdout
                                and terminate instead of calling the
                                runner.process_labelanalysis_result
  -h, --help                    Show this message and exit.
Above is the list of options for the label-analysis command.
The CLI will use the environment variable CODECOV_STATIC_TOKEN as the value for --token if one is not specified. Both the head-sha and the base-sha must have static analysis information already uploaded.
Runners are the plugins that collect and execute tests in your test suite. To understand how to use and configure them, let's start by checking the available ones, and then go over how to create your own runner.
To select a runner when running label analysis, use the --runner-name (or --runner) option in the CLI command.
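For example, a full invocation in CI might look like the following (an illustrative sketch, not a definitive setup: the SHA variables are placeholders you'd fill from your CI environment, and the token can instead be supplied via the CODECOV_STATIC_TOKEN environment variable as noted above):

codecovcli label-analysis \
  --token=$STATIC_TOKEN \
  --runner=pytest \
  --base-sha=$BASE_SHA \
  --head-sha=$HEAD_SHA \
  --dry-run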
Codecov CLI ships with 2 runners available: PythonStandardRunner and DANRunner.
This runner is for Python users that run tests with pytest. Under the hood it runs pytest to collect and execute tests. This runner should fit almost all users running Python tests.
Configuration options for the Python standard runner:
cli:
runners:
pytest:
coverage_root: "./"
collect_tests_options:
- "--ignore=path/to/ignore"
- "path/to/tests"
execute_tests_options:
- "cov-report=xml"
- "--verbose"
python_path: "/path/to/interpreter/python"
📘Prefer --option=value format
When adding configuration options for the collection phase, always prefer the --option=value format, keeping the option and its value in the same string in the list.
- coverage_root: becomes the --cov=<coverage_root> argument passed to pytest when running the collected tests. Don't pass --cov=/path in the options lists; use the coverage_root config option instead.
- python_path: the path to the Python interpreter to use. Defaults to python.
In the collection phase, the Python runner runs a command equivalent to the one below. Notice that if you don't provide any collect_tests_options configuration, it will try to collect the entire test suite.
python -m pytest \
-q \
--collect-only \
[option-in-collect_tests_options]
In the test execution phase, the subset of labels is fed into the Python runner and executed. The equivalent command is below. You can watch the progress of test execution in your CI as it goes.
python -m pytest \
--cov=[coverage-root] \
--cov-context=test \
[options-in-execute_tests_options] \
[set-of-labels-to-execute]
DAN stands for Do Anything Now. This runner is a “nuclear option” for the user to take full control of the code that's executed in the collection and execution phases. It does nothing by itself; it only runs the commands it is given.
Internally, it uses subprocess.run to execute the command. The output is captured, the subprocess's stdout is decoded, and that becomes the return value of your command to the CLI.
🚧With great power comes great responsibility
There are no safety checks for the provided commands. It's your responsibility to make sure they are safe and work properly with the label analysis process.
cli:
runners:
dan:
collect_tests_command:
- "./my_command"
- "--option=value"
process_labelanalysis_result_command: "./other_command --option value"
Directly provide the commands that will be executed in the collection and test execution phases. You need to provide both commands.
Commands can be provided as a list, as shown in the first example, or as a string directly, as shown in the second example. Prefer the list option.
In the collection phase, the DANRunner will run the command provided in collect_tests_command. The output of this command should be one test label per line (i.e. separated by \n), as shown below.
test_label_1
test_label_2
test_label_3
...
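For instance, my_command in the config above could be a small script like this hypothetical sketch, which prints pytest test ids one per line:

#!/usr/bin/env python
# Hypothetical collect_tests_command: print one test label per line.
import subprocess

out = subprocess.run(
    ["python", "-m", "pytest", "-q", "--collect-only"],
    capture_output=True,
    text=True,
)
for line in out.stdout.splitlines():
    # Keep only lines that look like pytest test ids (path::test_name).
    if "::" in line:
        print(line)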
In the execution phase, the DANRunner will run the command provided in process_labelanalysis_result_command. It will receive as its last argument a string representation of the JSON result of label-analysis. It should run the tests. We recommend running the tests in the subset set(absent_labels) | set(present_diff_labels) | set(global_level_labels).
# Last argument given to the command is a stringified version of the dictionary
{
"present_report_labels": ["label_1", "label_2", "label_3", "label_4"],
"absent_labels": ["label_new"],
"present_diff_labels": ["label_1", "label_2"],
"global_level_labels": ["label_3"],
}
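A process_labelanalysis_result_command could then be a script like this (a hypothetical sketch: it parses the last argument and runs the recommended subset with pytest):

#!/usr/bin/env python
# Hypothetical process_labelanalysis_result_command.
import json
import subprocess
import sys

result = json.loads(sys.argv[-1])  # last argument is the label-analysis result
labels_to_run = (
    set(result["absent_labels"])
    | set(result["present_diff_labels"])
    | set(result["global_level_labels"])
)
if labels_to_run:
    subprocess.run(["python", "-m", "pytest", *sorted(labels_to_run)], check=True)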
Custom runners allow you to take full control of how ATS interacts with your code. By creating a runner script yourself and using it with the Codecov CLI you can own the behavior of your runner and make sure it only does what you want it to do.
To create a custom runner you need to create a class that adheres to LabelAnalysisRunnerInterface. This essentially means it needs to implement 2 functions:
- def collect_tests(self) -> List[str] collects the list of test labels and returns it.
- def process_labelanalysis_result(self, result: LabelAnalysisRequestResult) handles the label-analysis result, usually by executing the tests related to the labels in result.
The class also has a params attribute, but it can be None. Ideally it's where you'll put the config for the class.
LabelAnalysisRunnerInterface source code
LabelAnalysisRequestResult source code
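For illustration, a minimal custom runner might look like the sketch below. This is an assumption-laden example, not the canonical implementation: the method names follow the interface described above (check the linked source for the exact signatures), and the pytest calls are just one possible way to collect and run tests.

from typing import List, Optional
import subprocess


class MyRunner:
    # A minimal sketch of a runner adhering to LabelAnalysisRunnerInterface.
    def __init__(self, params: Optional[dict] = None):
        self.params = params or {}

    def collect_tests(self) -> List[str]:
        # One possible implementation: collect pytest test ids, one per line.
        out = subprocess.run(
            ["python", "-m", "pytest", "-q", "--collect-only"],
            capture_output=True,
            text=True,
        )
        return [line for line in out.stdout.splitlines() if "::" in line]

    def process_labelanalysis_result(self, result) -> None:
        # Run the recommended subset of labels (see the lists described above).
        to_run = (
            set(result["absent_labels"])
            | set(result["present_diff_labels"])
            | set(result["global_level_labels"])
        )
        if to_run:
            subprocess.run(["python", "-m", "pytest", *sorted(to_run)], check=True)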
To configure your custom runner, add the config options to the CLI config file (for example codecov.yml).
The name of this config key (in the example, "MY_RUNNER") will be the name of your runner. Pass that to the label-analysis command in the runner option (e.g. --runner MY_RUNNER).
Then you need to add the path to the module where MY_RUNNER is defined. It's best to use the fully qualified module path to avoid import issues. You also need to provide the class name to be imported.
Optionally you can define params that will be passed to MY_RUNNER when the class is initialized.
cli:
runners:
MY_RUNNER:
module: project.helpers.runner
class: MyRunner
params:
foo: "bar"
This configuration will try to import the MyRunner class from project.helpers.runner and instantiate it with params {"foo": "bar"}, which is equivalent to writing:
from project.helpers.runner import MyRunner
runner = MyRunner({"foo": "bar"})
Then, to use MY_RUNNER, you'd call the command like so:
codecovcli --codecov-yml-path=codecov.yml label-analysis --runner=MY_RUNNER --base-sha=$BASE_SHA