This guide will lead you through the necessary steps of integrating Automated Test Selection into your Python repo. By the end of the guide, you will have a workflow that runs Automated Test Selection in your CI. This guide assumes GitHub Actions as the CI of choice, but the usage patterns can reasonably be extended to any other CI provider that Codecov supports. Find the full list of supported CI providers here.
📘Looking to quickly integrate with Github Actions?
If you're already using GitHub Actions, you can save some time and get set up quickly by adding the Codecov-ATS GitHub Action step to your workflow. Learn more here.
If you're using a different CI that we don't have an ATS helper for, keep reading.
📘This guide has 2 parts
Part 1 integrates static analysis into your CI. Part 2 integrates label analysis. You need to complete both for a successful Automated Test Selection integration.
You will need a Codecov upload token (aka CODECOV_TOKEN) and a Static Analysis token (aka CODECOV_STATIC_TOKEN). You can find both on your repository's configuration page in the Codecov UI, under General.
Copy these tokens for later. They will be used to authenticate actions with the Codecov CLI.
Go to the ⚙️ Settings tab in your GitHub repository. In the side panel, find Secrets and variables > Actions.
There you should create 2 repository secrets:
- CODECOV_TOKEN, with the value of the Codecov upload token
- CODECOV_STATIC_TOKEN, with the value of the Static Analysis token
To use Automated Test Selection you need a flag with the new carryforward mode "labels". You can reuse an existing flag if you already have one. Read more about Carryforward Flags. Here we are creating a "smart-tests" flag with carryforward_mode: "labels".
We're also adding some configuration for the CLI that will be used when uploading the coverage data to Codecov.
flag_management:
  individual_flags:
    - name: smart-tests
      carryforward: true
      carryforward_mode: "labels"
      statuses:
        - type: "project"
        - type: "patch"
cli:
  plugins:
    pycoverage:
      report_type: "json"
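As a quick sanity check, you can assert that the settings Automated Test Selection depends on are present in your codecov.yml. The sketch below does plain string matching rather than real YAML parsing (so it's purely illustrative; the `missing_ats_settings` helper is made up for this example, not part of any Codecov tooling):

```python
# Naive, illustrative check that codecov.yml enables label carryforward.
# Replace CODECOV_YML with your own file contents; a real check should use a YAML parser.
CODECOV_YML = """\
flag_management:
  individual_flags:
    - name: smart-tests
      carryforward: true
      carryforward_mode: "labels"
cli:
  plugins:
    pycoverage:
      report_type: "json"
"""

def missing_ats_settings(yaml_text: str) -> list:
    # The three literal settings this guide relies on.
    required = ['carryforward: true', 'carryforward_mode: "labels"', 'report_type: "json"']
    return [setting for setting in required if setting not in yaml_text]

assert missing_ats_settings(CODECOV_YML) == []
```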
Go into your local copy of your repository and create a new file .github/workflows/codecov-ats.yml.
We are also setting up Python and checking out the code. Notice the use of fetch-depth: 0. This option includes your full git history in the checkout; it will be used by the label-analysis step later on.
You can copy the following snippet:
name: Codecov-ATS
on:
  push
jobs:
  codecov-ats:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v3
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install Codecov CLI
        run: |
          python -m pip install --upgrade pip
          pip install codecov-cli
      - name: Create commit in codecov
        run: |
          codecovcli create-commit -t ${{ secrets.CODECOV_TOKEN }}
      - name: Create commit report in codecov
        run: |
          codecovcli create-report -t ${{ secrets.CODECOV_TOKEN }}
      - name: Static Analysis
        run: |
          codecovcli static-analysis --token ${{ secrets.CODECOV_STATIC_TOKEN }}
Note: the static analysis run can take a while, as we're uploading a lot of information. Subsequent runs are faster.
We recommend stopping here and creating a PR with your changes so far, to let the CI run for the current commit. Then branch off of it and follow the next steps. If you choose not to do this, just know that the first run of Automated Test Selection will fail at the label-analysis step, but subsequent runs should work correctly.
📘Depends on Part 1
You should only follow the steps below if you successfully completed Part 1 of the guide.
Now that you are (hopefully) on a fresh commit, let's resume our changes to .github/workflows/codecov-ats.yml.
Next we'll set up the test environment, run label analysis (which is where Codecov determines which tests need to run), and upload the results to Codecov.
...
# previous step. Upload static analysis information
# current step. Install dependencies, build, run Label Analysis, upload coverage to Codecov
# Part 2
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Label Analysis
        run: |
          BASE_SHA=$(git rev-parse HEAD^)
          codecovcli label-analysis --token ${{ secrets.CODECOV_STATIC_TOKEN }} --base-sha=$BASE_SHA
      - name: Upload to Codecov
        run: |
          codecovcli --codecov-yml-path=codecov.yml do-upload \
            -t ${{ secrets.CODECOV_TOKEN }} \
            --plugin pycoverage \
            --plugin compress-pycoverage \
            --flag smart-tests
Note:
- Label analysis needs a BASE_SHA to compare against the current commit. If you've followed the guide, the other commit with static analysis should be the parent of your commit (HEAD^). To get a reference to it we can use git rev-parse HEAD^.
- When working on a feature branch, you may want to use git merge-base HEAD^ origin/master as the base commit instead. This ensures the diff covers the entire feature branch's changes compared to master.
- You can use the dry run option in the CLI to output a list of tests. Here's how.
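The difference between the two possible base commits can be seen with plain git. The sketch below (assuming the git binary is on PATH; the throwaway repository and branch names are made up for illustration) builds a two-commit feature branch and shows that HEAD^ and git merge-base pick different commits, which is why merge-base covers the whole branch diff:

```python
import subprocess
import tempfile

def git(*args, cwd):
    # Run a git command in the given repo and return its trimmed stdout.
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "ci@example.com", cwd=repo)
git("config", "user.name", "ci", cwd=repo)
git("commit", "--allow-empty", "-m", "base", cwd=repo)
git("branch", "-M", "main", cwd=repo)
base = git("rev-parse", "HEAD", cwd=repo)

# A feature branch with two commits on top of main.
git("checkout", "-b", "feature", cwd=repo)
git("commit", "--allow-empty", "-m", "feature 1", cwd=repo)
git("commit", "--allow-empty", "-m", "feature 2", cwd=repo)

head_parent = git("rev-parse", "HEAD^", cwd=repo)         # parent of the branch tip
merge_base = git("merge-base", "HEAD", "main", cwd=repo)  # where the branch forked off

# On a multi-commit branch the two differ; only merge-base spans the full branch diff.
assert merge_base == base
assert head_parent != merge_base
```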
- The --flag passed to do-upload must match the flag configured in codecov.yml (we called it smart-tests).
- Only json coverage reports are supported for Automated Test Selection. The easiest way to produce one is to use the pycoverage plugin on the CLI. The compress-pycoverage plugin is optional but recommended, as it can greatly reduce the size of the report that will be uploaded to Codecov.
If you followed the guide up until this point, your .github/workflows/codecov-ats.yml should look like this:
name: Codecov-ATS
on:
  push
jobs:
  codecov-ats:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v3
      # Part 1
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install Codecov CLI
        run: |
          python -m pip install --upgrade pip
          pip install codecov-cli
      - name: Create commit in codecov
        run: |
          codecovcli create-commit -t ${{ secrets.CODECOV_TOKEN }}
      - name: Create commit report in codecov
        run: |
          codecovcli create-report -t ${{ secrets.CODECOV_TOKEN }}
      - name: Static Analysis
        run: |
          codecovcli static-analysis --token ${{ secrets.CODECOV_STATIC_TOKEN }}
      # Part 2
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
      - name: Label Analysis
        run: |
          BASE_SHA=$(git rev-parse HEAD^)
          codecovcli label-analysis --token ${{ secrets.CODECOV_STATIC_TOKEN }} --base-sha=$BASE_SHA
      - name: Upload to Codecov
        run: |
          codecovcli --codecov-yml-path=codecov.yml do-upload \
            -t ${{ secrets.CODECOV_TOKEN }} \
            --plugin pycoverage \
            --plugin compress-pycoverage \
            --flag smart-tests
To check that it's really working, look at the logs of the CLI. Look for the summary of information about tests to run. On the first run with label analysis, expect that line to have only absent_labels information. This is because Codecov has no coverage info for any tests in your test suite yet.
info - 2023-05-02 17:11:55,620 -- Received information about tests to run --- {"absent_labels": 47, "present_diff_labels": 0, "global_level_labels": 0, "present_report_labels": 0}
Over time, we expect to see more labels in present_report_labels, so that only new tests are in absent_labels, and tests affected by your recent changes are in present_diff_labels. Something similar to the example below.
info - 2023-05-02 17:11:55,620 -- Received information about tests to run --- {"absent_labels": 3, "present_diff_labels": 5, "global_level_labels": 0, "present_report_labels": 47}
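If you want to act on those numbers in a script (for example, to flag when label analysis keeps falling back to running everything), the counts can be pulled straight out of the log line, since they are logged as a JSON object. A small sketch using the sample line above; note that which keys count as "selected to run" is our reading of this guide, not an official API:

```python
import json
import re

log_line = (
    'info - 2023-05-02 17:11:55,620 -- Received information about tests to run '
    '--- {"absent_labels": 3, "present_diff_labels": 5, '
    '"global_level_labels": 0, "present_report_labels": 47}'
)

def parse_label_counts(line: str) -> dict:
    # The counts are the JSON object at the end of the log line.
    match = re.search(r"\{.*\}", line)
    return json.loads(match.group(0))

counts = parse_label_counts(log_line)
# Tests selected to run: new tests + tests touched by the diff + global-level tests.
selected = counts["absent_labels"] + counts["present_diff_labels"] + counts["global_level_labels"]
assert selected == 8
assert counts["present_report_labels"] == 47
```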
Do I need fetch-depth: 0 when checking out the code?
Not really. As long as you know the BASE_SHA, you can check out just the current commit. If you always use the parent (HEAD^) commit, you can use a fetch-depth of 2, for example. However, when doing git merge-base HEAD^ origin/master, you will need a depth of 0.
Why on: push and not on: pull_request?
You can run Automated Test Selection only on pull requests if you prefer. However, we need static analysis information for all commits. In particular, if you were to merge the PR containing the workflow into main with it set to only run on pull requests, we would not have that information for the merged commit, and Part 2 (steps 9 - 11) would fail.
It's also perfectly fine to run this on your main branch.