Conversation

@mseong6251
Contributor

Added details on coverage baseline usage and its impact on test analysis.

Description

What did you change?

Reasons

Why did you make these changes?

Content Checklist

Please follow our style when contributing to CircleCI docs. Our style guide is here: https://circleci.com/docs/style/style-guide-overview.

Please take a moment to check through the following items when submitting your PR (this is just a guide, so not all items will be relevant for every PR):

  • Break up walls of text by adding paragraph breaks.
  • Consider if the content could benefit from more structure, such as lists or tables, to make it easier to consume.
  • Keep the title between 20 and 70 characters.
  • Consider whether the content would benefit from more subsections (h2-h6 headings) to make it easier to consume.
  • Check all headings h1-h6 are in sentence case (only first letter is capitalized).
  • Include relevant backlinks to other CircleCI docs/pages.

@mseong6251 mseong6251 requested review from a team as code owners November 4, 2025 20:57
Added ideal use cases and current constraints for adaptive testing.
* Projects with comprehensive test coverage - The more thorough your tests, the more precisely adaptive testing can identify which tests are impacted by changes
* Test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest) where generating coverage reports is straightforward

*Why coverage matters*: In codebases with sparse test coverage, adaptive testing cannot accurately determine which tests cover changed code. This causes the system to default to running all tests, negating the benefits of intelligent test selection.

@gordonsyme gordonsyme Nov 7, 2025


What do you think about

This causes the system to run more tests, reducing the benefits of intelligent test selection.

|.circleci/example-baseline.info
|Path to a baseline coverage file to subtract from test coverage during analysis. +
Use this to exclude shared setup or initialization code from test impact data. +
The baseline file should be in the same format as your test coverage output (e.g., LCOV format for `<< outputs.lcov >>`).
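To make the "subtract a baseline" idea concrete, here is a short sketch. This is a conceptual illustration only, not CircleCI's implementation; the file paths and hit counts are made up. It parses minimal LCOV records (`SF:`/`DA:`/`end_of_record`) and removes any covered lines that also appear in the baseline, which is how shared setup or initialization code gets excluded from test impact data.

```python
# Conceptual sketch of baseline subtraction (not CircleCI's implementation).

def parse_lcov(text):
    """Return {source_file: {line_number, ...}} for lines with hit count > 0."""
    covered = {}
    current = None
    for line in text.splitlines():
        if line.startswith("SF:"):
            current = line[3:]
            covered.setdefault(current, set())
        elif line.startswith("DA:") and current is not None:
            lineno, hits = line[3:].split(",")
            if int(hits) > 0:
                covered[current].add(int(lineno))
        elif line == "end_of_record":
            current = None
    return covered

def subtract_baseline(coverage, baseline):
    """Drop lines that the baseline also covers from each file's coverage."""
    return {
        path: lines - baseline.get(path, set())
        for path, lines in coverage.items()
        if lines - baseline.get(path, set())
    }

# Illustrative data: the test covers lines 1-2; the baseline covers line 1.
test_cov = parse_lcov("SF:src/app.py\nDA:1,1\nDA:2,1\nDA:3,0\nend_of_record")
baseline = parse_lcov("SF:src/app.py\nDA:1,1\nend_of_record")
print(subtract_baseline(test_cov, baseline))  # {'src/app.py': {2}}
```

After subtraction, only line 2 is attributed to the test, so changes to the shared line 1 no longer mark this test as impacted.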

@gordonsyme gordonsyme Nov 7, 2025


Not strictly true: the baseline can be in any supported coverage format and we'll deal with it. There's no reason not to use the same format as test coverage output, though.


=== Can I run analysis on branches other than main?

Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis typically runs on `main` by default, you can configure it to run on:

How about:

While analysis runs on main by default, you can configure it to run on:

. Any specific branch (for example, `develop` or `staging`).
. Multiple branches simultaneously.
. Feature branches if needed for testing.
. Scheduled pipelines independent of branch.
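The branch options above can be sketched with standard CircleCI workflow branch filters. This is a minimal example, assuming a hypothetical job name `run-analysis` for whatever job performs the analysis in your config:

```yaml
workflows:
  analysis:
    jobs:
      - run-analysis:        # placeholder job name
          filters:
            branches:
              only:
                - main
                - develop    # add any other branches you want analysis on
```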

Maybe `. Scheduled pipelines` is enough?

* Vitest

The key requirement is that your test runner can generate coverage data in a parsable format (typically LCOV or similar).

Right now we can only read LCOV and Go's "legacy coverage" format.
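For reference, the two formats look roughly like this; the file paths, line numbers, and counts below are illustrative. First, LCOV (one `SF:` record per source file, `DA:line,hits` per line):

```
SF:src/app.js
DA:1,1
DA:2,0
end_of_record
```

And Go's legacy cover profile (a `mode:` header, then `file:startLine.startCol,endLine.endCol statements count` per block):

```
mode: set
example.com/pkg/app.go:10.2,12.16 2 1
```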

Comment on lines 24 to 27
* Distributed test architectures where tests run against external services, separate containers, or multiple isolated contexts
* Limited coverage tooling for test frameworks that don't provide native instrumentation or coverage reporting
* Complex test configurations with non-standard test discovery, custom test runners, or unconventional project structures
* End-to-end tests that span multiple repositories or services, making it difficult to map code changes to specific tests

I think we'll want to word these as things your test suite is constrained to do, rather than constrained not to do, e.g.:

=== Current Constraints
* Generating code coverage data is essential for determining how tests are related to code. If tests are run in a way that makes generating and accessing code coverage data tricky then adaptive testing may not be a good fit.
* Adaptive testing needs to be configured with commands to discover all available tests and run a subset of those tests. If you cannot run commands to discover tests and run a subset of tests on the CLI then adaptive testing may not be a good fit.
* Adaptive testing works best when testing a single deployable unit. A monorepo which performs integration tests across many packages at once may not be a good fit.

