Modern Python CI with Coverage in 2025
November 03, 2025
Warning
This blog post has been written by an LLM for the most part. The image above has been generated by a soulless ghoul called Gemini 2.5. I hope you still find the blog useful!
I've recently revisited building a GitHub CI pipeline for a Python project with coverage reporting, using only free-as-in-beer tooling, and I've landed in a pretty nice place.
The most interesting parts of the toolchain in this setup are:
- py-cov-action: Coverage reporting without external services like Codecov or Coveralls
- pytest-xdist: Parallel test execution using all available CPU cores
- uv: Package management that's 10-100× faster than pip
Each individual tool is well documented on its own, but integrating them involves a few nuances that aren't written down anywhere else, and that interplay is what this blog focuses on.
Our toolchain
py-cov-action: GitHub-native coverage
py-cov-action eliminates external coverage services. It posts neat PR comments, generates badges, and adds line-by-line annotations, all within GitHub.
The badge lives in a branch of your own repository rather than on an external service, at github.com/USER/REPO/raw/python-coverage-comment-action-data/BADGE.svg.
Basic configuration in .github/workflows/ci.yml:
- uses: py-cov-action/python-coverage-comment-action@v3
  with:
    GITHUB_TOKEN: ${{ github.token }}
    ANNOTATE_MISSING_LINES: true
py-cov-action uses a two-workflow pattern, which is needed when you expect PRs from users who don't have write access to the repository. For simpler setups without fork PRs, you can use a single workflow, with py-cov-action as part of your main CI workflow.
The two-workflow pattern for fork PRs
Fork PRs run with read-only permissions. A single workflow can't post comments from a fork. The solution is splitting into two workflows:
- ci.yml: Runs tests, computes coverage, saves comment to artifact
- post-coverage-comment.yml: Posts the saved comment (runs in trusted context)
The second workflow uses the workflow_run trigger:
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]
The py-cov-action docs have more details on different variants.
pytest-xdist: Parallel testing by default
Modern CPUs have 4-16 cores. Running tests serially wastes most of that capacity. pytest-xdist fixes this with one flag:
pytest -n auto --cov
The -n auto flag spawns one worker per CPU core, which can reduce runtime drastically for a typical test suite. But beware of global state and shared resources: parallelism works best for unit tests and tests that keep their resources isolated (which are arguably the only kinds of good tests anyway, except for maybe the odd E2E test).
Warning
Don't use coverage run -m pytest -n auto. This bypasses pytest-cov's xdist integration and will report 0% coverage. Use pytest --cov instead when running parallel tests.
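Back to the global-state caveat: the easiest fix is to give every worker its own resources. Here's a minimal sketch (not from any particular project, the SQLite database is just a stand-in for whatever your tests share) that combines pytest's tmp_path_factory with the worker_id fixture pytest-xdist provides:

# conftest.py -- one isolated database file per xdist worker
import sqlite3

import pytest


@pytest.fixture(scope="session")
def db_path(tmp_path_factory, worker_id):
    # tmp_path_factory already hands each worker its own base directory;
    # worker_id is "gw0", "gw1", ... under xdist, or "master" without it.
    path = tmp_path_factory.mktemp("data") / f"test-{worker_id}.sqlite3"
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY)")
    conn.commit()
    conn.close()
    return path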
uv: Fast package management
uv is from the team behind Ruff. It replaces tools such as the beloved pip, pip-tools, virtualenv, and pyenv. The main advantage is speed: 10-100× faster than pip in typical use.
In GitHub Actions:
- uses: astral-sh/setup-uv@v6
- run: uv python install 3.14
- run: uv sync --dev
That's it. There's no actions/setup-python and no cache configuration. The uv.lock file ensures deterministic builds across all environments.
For local development, your Makefile might look like:
install:
	uv sync --dev

test:
	uv run pytest tests/ -v -n auto --cov
Notice how we prefix our pytest command with uv run, which automatically uses the project's virtual environment. No separate source bin/activate step is needed.
Five critical gotchas
These silent failures can easily cost you a couple of hours. Each one makes your CI look successful while coverage is actually broken.
Gotcha #1: Using coverage run -m pytest with xdist
Symptom: Coverage shows 0% or drastically low percentages.
Root cause: coverage run -m pytest -n auto bypasses pytest-cov's xdist support. Coverage.py only sees the main process, not the workers.
The fix: Always use pytest -n auto --cov
Gotcha #2: Missing relative_files = true
Symptom: Coverage works locally but shows 0% in CI.
Root cause: Coverage.py uses absolute paths by default (/home/runner/work/myproject/myproject/file.py). These don't match GitHub's file structure.
The fix in pyproject.toml:
[tool.coverage.run]
relative_files = true
Without this, py-cov-action can't map coverage data to source files.
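If you want to double-check locally that the recorded paths really are relative, you can peek into the .coverage data file with coverage.py's data API. A quick sketch:

# check_coverage_paths.py -- print a few paths recorded in .coverage
from coverage import CoverageData

data = CoverageData()  # defaults to the ".coverage" file in the current directory
data.read()
for path in sorted(data.measured_files())[:5]:
    # with relative_files = true these should look like "mypackage/api.py",
    # not "/home/runner/work/..."
    print(path)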
Gotcha #3: Fork PRs can't post comments
Symptom: Coverage comments appear for same-repo PRs but not forks.
Root cause: Fork PRs run with read-only GITHUB_TOKEN.
The fix: Use the two-workflow pattern described earlier.
Gotcha #4: Missing pytest-cov plugin
Symptom: pytest: error: unrecognized arguments: --cov
Root cause: pytest-cov not installed.
The fix in pyproject.toml:
[tool.uv]
dev-dependencies = [
    "pytest-cov>=6.0.0",
]
Gotcha #5: E2E tests with subprocesses contribute 0% coverage
Symptom: E2E tests with Playwright or Selenium pass but show 0% coverage for server code.
Root cause: Tests spawn a subprocess. Coverage.py in the parent process can't measure the child.
The problem in practice:
# tests/test_frontend_e2e.py
import subprocess

import pytest
import requests


@pytest.fixture
def live_server():
    process = subprocess.Popen(
        ["uv", "run", "mypackage", "serve"],  # Separate process!
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # ... wait for server to start ...
    yield server_url
    process.terminate()  # Server shuts down


def test_frontend_loads(live_server):
    # This test passes but contributes 0% coverage
    response = requests.get(live_server)
    assert response.status_code == 200
The fix: set the COVERAGE_PROCESS_START environment variable and invoke the server via coverage run -m:
import os
import subprocess

import pytest


@pytest.fixture
def live_server():
    env = os.environ.copy()
    env["COVERAGE_PROCESS_START"] = "pyproject.toml"

    # Copy pytest-cov environment variables
    for key, value in os.environ.items():
        if key.startswith("COV_"):
            env[key] = value

    # Invoke via coverage run
    process = subprocess.Popen(
        ["uv", "run", "coverage", "run", "-m", "mypackage", "serve"],
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
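For completeness, here's a sketch of what the whole fixture can look like; the URL, port, and timeouts are made up for illustration. The teardown is the part that's easy to get wrong: coverage run only writes its data file when the child process exits, so terminate the server gracefully and wait for it instead of killing it outright (and if your server doesn't exit cleanly on SIGTERM, coverage.py's sigterm setting is worth a look):

import os
import subprocess
import time

import pytest
import requests


@pytest.fixture
def live_server():
    env = os.environ.copy()  # also carries over any COV_* variables pytest-cov sets
    env["COVERAGE_PROCESS_START"] = "pyproject.toml"

    process = subprocess.Popen(
        ["uv", "run", "coverage", "run", "-m", "mypackage", "serve"],
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    server_url = "http://127.0.0.1:8000"  # assumed default port
    deadline = time.monotonic() + 30
    while time.monotonic() < deadline:  # poll until the server answers
        try:
            requests.get(server_url, timeout=1)
            break
        except requests.ConnectionError:
            time.sleep(0.2)
    else:
        process.kill()
        raise RuntimeError("server did not become ready in time")

    yield server_url

    process.terminate()           # SIGTERM: let the server shut down cleanly
    try:
        process.wait(timeout=10)  # coverage writes its data file on exit
    except subprocess.TimeoutExpired:
        process.kill()            # last resort; coverage data may be lost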
Before doing this, though, take a step back and consider whether you want E2E tests to count towards coverage at all. Some would argue it inflates the numbers, but looking at E2E coverage in isolation can also be genuinely useful. Either way, beware that any frontend code written in a language other than Python will not be tracked.
Some reasons why you might not want to include coverage for E2E tests:
- Unit tests already cover most backend logic directly
- Integration tests already hit the same API endpoints
- Coverage.py only measures Python code, not JavaScript
- E2E tests primarily verify frontend/backend integration
Complete working example
Here's a production-ready setup you can copy and adapt. These files work together to provide parallel testing, coverage reporting, and fork-safe PR comments.
Note
This example uses the two-workflow pattern for fork PR support described earlier.
.github/workflows/ci.yml
name: CI

on:
  pull_request:
  push:
    branches: [main, master]

permissions:
  contents: write
  pull-requests: write
  checks: write
  actions: read

jobs:
  test:
    name: Test & Coverage
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v6

      - name: Install Python 3.14
        run: uv python install 3.14

      - name: Sync dependencies
        run: uv sync --dev

      - name: Install Playwright browsers (if needed)
        run: uv run playwright install chromium  # Remove this step if you don't have E2E tests

      - name: Run tests with coverage
        run: |
          uv run pytest -n auto -v \
            --cov --cov-report=xml --cov-report=html --cov-report=term \
            --junitxml=test-results.xml

      - name: Upload coverage artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: coverage-report
          path: |
            .coverage
            coverage.xml
            htmlcov/
          include-hidden-files: true  # Critical for .coverage file

      - name: Publish test results
        uses: dorny/test-reporter@v1
        if: always()
        with:
          name: Test Results
          path: test-results.xml
          reporter: java-junit

      - name: Coverage analysis
        id: cov
        uses: py-cov-action/python-coverage-comment-action@v3
        with:
          GITHUB_TOKEN: ${{ github.token }}
          ANNOTATE_MISSING_LINES: true
          ANNOTATION_TYPE: warning

      - name: Store PR comment
        if: steps.cov.outputs.COMMENT_FILE_WRITTEN == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: python-coverage-comment-action
          path: python-coverage-comment-action.txt
.github/workflows/post-coverage-comment.yml
name: Post coverage comment

on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]

permissions:
  pull-requests: write
  contents: write
  actions: read

jobs:
  post:
    name: Publish PR coverage comment
    runs-on: ubuntu-latest
    if: >
      github.event.workflow_run.event == 'pull_request' &&
      github.event.workflow_run.conclusion == 'success'
    steps:
      - name: Post comment
        uses: py-cov-action/python-coverage-comment-action@v3
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITHUB_PR_RUN_ID: ${{ github.event.workflow_run.id }}
pyproject.toml configuration
[project]
name = "mypackage"
version = "0.1"
requires-python = ">=3.14"

[tool.uv]
package = true
dev-dependencies = [
    "pytest>=8.0.0",
    "pytest-xdist>=3.6.0",
    "pytest-cov>=6.0.0",
    "mypy>=1.13.0",
    "ruff>=0.8.0",
]

[tool.coverage.run]
source = ["mypackage"]
omit = [
    "tests/*",
    "*/__init__.py",
    "*/conftest.py",
]
relative_files = true  # Required for CI!

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "def __str__",
    "raise AssertionError",
    "raise NotImplementedError",
    "if __name__ == .__main__.:",
    "if TYPE_CHECKING:",
]
show_missing = true
precision = 1

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v"
Verifying your setup
After pushing these files, here's how to verify everything works:
Check relative paths
Run tests locally and inspect coverage.xml:
grep 'filename=' coverage.xml | head -3
You should see relative paths like filename="mypackage/cli.py", not absolute paths like /home/runner/work/....
Verify artifact upload
In GitHub Actions:
- Go to your workflow run
- Scroll to "Artifacts"
- Download coverage-report
- Verify .coverage file exists inside
Test fork PR comments
- Fork your repo
- Make a change and submit a PR
- Wait for CI to complete
- Check that "Post coverage comment" workflow runs
- Verify comment appears on the PR
What success looks like
Locally:
$ uv run pytest -n auto --cov --cov-report=term
============================= test session starts ==============================
...
======================= 231 passed, 2 skipped in 61.14s ========================

Name                   Stmts   Miss   Cover
-------------------------------------------
mypackage/api.py         139     23   83.5%
mypackage/cli.py         397    221   44.3%
...
-------------------------------------------
TOTAL                   1427    488   65.8%
In GitHub Actions, you'll see:
- Test Results check with pass/fail counts
- Coverage comment on PR with diff
- Line-by-line annotations on changed files
- Badge in python-coverage-comment-action-data branch
Migration notes
From Codecov/Coveralls
Replace your codecov step:
# Old
- uses: codecov/codecov-action@v3
  with:
    token: ${{ secrets.CODECOV_TOKEN }}

# New
- uses: py-cov-action/python-coverage-comment-action@v3
  with:
    GITHUB_TOKEN: ${{ github.token }}
No external token needed. Badge URL changes from codecov.io/gh/USER/REPO/badge.svg to github.com/USER/REPO/raw/python-coverage-comment-action-data/BADGE.svg.
From pip to uv
Replace in your workflow:
# Old
- uses: actions/setup-python@v4
  with:
    python-version: "3.14"
- run: pip install -r requirements.txt

# New
- uses: astral-sh/setup-uv@v6
- run: uv python install 3.14
- run: uv sync --dev
Create pyproject.toml with your dependencies and run uv lock to generate the lockfile.
Next steps
- Copy the workflow files above
- Add relative_files = true to your pyproject.toml
- Push to GitHub and watch CI run
- Add the badge to your README from the data branch
For slow tests, mark them with @pytest.mark.slow and run them separately. For coverage gaps, focus on unit tests for business logic.
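For example, a slow test and its registration might look like this (the test itself is made up; the marker registration and the -m filter are standard pytest features):

# tests/test_reports.py -- marking a slow test so the default run can skip it
import time

import pytest


@pytest.mark.slow
def test_full_report_generation():
    time.sleep(5)  # stand-in for genuinely slow work
    assert True


# Register the marker in pyproject.toml under [tool.pytest.ini_options]:
#   markers = ["slow: long-running tests, excluded from the default run"]
# Then run the fast suite with:   uv run pytest -n auto --cov -m "not slow"
# and the slow suite on its own:  uv run pytest -m slow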
That's a wrap! Modern Python CI in 2025: fast, parallel, and entirely within GitHub. Have fun and leave your thoughts in the comments below.