docs/guides/modules/test/pages/adaptive-testing.adoc (54 additions, 1 deletion)

@@ -11,6 +11,22 @@ NOTE: This page is currently in development and will be updated as the feature is developed.

Use adaptive testing to run only tests that are impacted by code changes and evenly distribute tests across parallel execution nodes. Adaptive testing reduces test execution time while maintaining test confidence.
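
A minimal sketch of what enabling the feature looks like, based on the `test-suites.yml` example shown later on this page:

[source,yaml]
----
# .circleci/test-suites.yml
# Minimal sketch: enable adaptive testing with default settings.
# The full option set is described in the options table below.
options:
  adaptive-testing: true
----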

== Where adaptive testing works well

=== Ideal use cases
* Unit and integration tests that exercise code within the same repository
* Projects with comprehensive test coverage: the more thorough your tests, the more precisely adaptive testing can identify which tests are impacted by changes
* Test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest) where generating coverage reports is straightforward

*Why coverage matters*: In codebases with sparse test coverage, adaptive testing cannot accurately determine which tests cover changed code. This causes the system to default to running all tests, negating the benefits of intelligent test selection.

@gordonsyme (Member) commented on Nov 7, 2025:

What do you think about

> This causes the system to run more tests, reducing the benefits of intelligent test selection.
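
As an aside on how little setup the built-in-coverage frameworks need, a hedged sketch of CircleCI steps that produce an LCOV report with Jest (the commands are assumptions, not documented defaults):

[source,yaml]
----
# Sketch only: Jest writes coverage/lcov.info when the lcov reporter is used.
steps:
  - checkout
  - run: npm ci
  - run: npx jest --coverage --coverageReporters=lcov
----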


=== Current friction points
* Cases where it is difficult to instrument the code under test and collect coverage from it
* Limited coverage tooling for test frameworks that don't provide native instrumentation or coverage reporting
* Complex test configurations
* End-to-end tests that span multiple repositories or services, making it difficult to map code changes to specific tests


== Key benefits

* Faster CI/CD pipelines through intelligent test selection.
@@ -490,6 +506,12 @@ Any remaining tests will be analysed the next time test analysis is run.
|true
|Whether the tests should be distributed across a shared queue and fetched across multiple dynamic batches. +
If a test runner has slow start-up time per batch, disabling this can speed up tests.

| `coverage-baseline`
|.circleci/example-baseline.info
|Path to a baseline coverage file to subtract from test coverage during analysis. +
Use this to exclude shared setup or initialization code from test impact data. +
The baseline file should be in the same format as your test coverage output (e.g., LCOV format for <<outputs.lcov>>).

@gordonsyme (Member) commented on Nov 7, 2025:

~~ Not strictly true, the baseline can be in any supported coverage format and we'll deal with it. There's no reason not to use the same format as test coverage output though.

|===

The following flags are available to be defined on the `circleci run testsuite` command.
@@ -770,6 +792,37 @@ Yes! The branch behavior is fully customizable through your CircleCI configuration.

See Scenario 3 in the "Flag Usage Scenarios" section for examples of customizing branch behavior.

=== Can I run analysis on branches other than main?

Review comment:

~ Might be worth scoping this to analysis and selection, or having a separate point that confirms test selection can be customised the same way.

> Can I run test impact analysis on any branch?

or, to be more clear on the two phases mentioned above:

> Can I run test analysis and selection on any branch?


Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis typically runs on `main` by default, you can configure it to run on:

Review comment (Member):

How about:

> While analysis runs on main by default, you can configure it to run on:


. Any specific branch (for example, `develop` or `staging`).
. Multiple branches simultaneously.

Review comment:

This isn't something we want to encourage; running analysis on separate branches simultaneously will result in two sources of truth contending with each other.

The current limitation is that each testsuite should have a single source of truth, usually the base branch of commits (main), so that the selection will always be a close comparison to the commit diff.

. Feature branches if needed for testing.
. Scheduled pipelines independent of branch.

Review comment (Member):

~~ Maybe `. Scheduled pipelines` is enough?


See Scenario 3 in the "Flag Usage Scenarios" section for examples of customizing branch behavior.
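
To scope analysis to one branch as described above, a hedged sketch using standard CircleCI workflow filters; the job name `run-test-analysis` is a placeholder for whichever job invokes analysis:

[source,yaml]
----
# Sketch only: run the analysis job on main and nowhere else.
workflows:
  adaptive-testing:
    jobs:
      - run-test-analysis: # placeholder name, not a documented default
          filters:
            branches:
              only: main # swap in develop, staging, etc. to analyse another branch
----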

=== Why are so many files impacting a test?

If you see many files impacting each test during analysis (e.g., "...found 150 files impacting test..."), this may be caused by shared setup code like global imports or framework initialization being included in coverage.

*Solution: Use a coverage baseline to exclude setup code*

. Create a minimal test that only does imports/setup (no test logic)
. Generate coverage from that test and save it to `.circleci/example-baseline.info` (you can name the file however you'd like)
. Reference it in your test suite:

[source,yaml]
----
# .circleci/test-suites.yml
options:
adaptive-testing: true
coverage-baseline: .circleci/example-baseline.info
----

The coverage data in the baseline file will be subtracted from each test's coverage during analysis. Rerun analysis and you should see fewer impacting files per test. Note that the baseline file should be in the same format as your test coverage output (e.g., LCOV format for <<outputs.lcov>>).
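
For steps 1 and 2, one possible shape of the job that generates the baseline file. This is a sketch under assumptions: the Docker image, the Jest-based commands, and the test file path are placeholders, and any runner that emits a supported coverage format works the same way:

[source,yaml]
----
# Sketch only: generate coverage from a setup-only test and store it as the
# baseline file referenced by `coverage-baseline` in test-suites.yml.
jobs:
  generate-coverage-baseline:
    docker:
      - image: cimg/node:20.11 # placeholder image
    steps:
      - checkout
      - run: npm ci
      - run: npx jest tests/baseline.setup.test.js --coverage --coverageReporters=lcov
      - run: cp coverage/lcov.info .circleci/example-baseline.info
----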

=== What test frameworks are supported?

Adaptive testing is runner-agnostic. We provide default configurations for the following test frameworks:
* Cypress (E2E testing)
* Vitest

The key requirement is that your test runner can generate coverage data in a parsable format (typically LCOV or similar).

Review comment (Member):

Right now we can only read LCOV and Go's "legacy coverage" format.
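
Illustrating the second of those two formats, a hedged sketch of a step that emits Go's text-format coverage profile using the standard toolchain flags:

[source,yaml]
----
# Sketch only: write Go's text ("legacy") coverage profile to coverage.out.
steps:
  - run: go test -coverprofile=coverage.out ./...
----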