Enhance documentation for coverage baseline in tests #9731
@@ -11,6 +11,22 @@ NOTE: This page is currently in development and will be updated as the feature i

Use adaptive testing to run only tests that are impacted by code changes and evenly distribute tests across parallel execution nodes. Adaptive testing reduces test execution time while maintaining test confidence.

== Where Adaptive Testing works well:

=== Ideal use cases
* Unit and integration tests that exercise code within the same repository
* Projects with comprehensive test coverage - the more thorough your tests, the more precisely adaptive testing can identify which tests are impacted by changes
* Test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest), where generating coverage reports is straightforward

*Why coverage matters*: In codebases with sparse test coverage, adaptive testing cannot accurately determine which tests cover changed code. This causes the system to default to running all tests, negating the benefits of intelligent test selection.
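Enabling coverage in the frameworks named above (Jest, pytest, Go test, Vitest) is typically a single flag. A hedged sketch of common invocations; exact flags vary by framework version and plugin setup, and paths are illustrative:

```shell
# Typical coverage-enabled invocations (flags may differ by version/config).
npx jest --coverage --coverageReporters=lcov   # Jest
pytest --cov=src --cov-report=lcov             # pytest with the pytest-cov plugin
go test ./... -coverprofile=coverage.out       # Go's built-in coverage
npx vitest run --coverage                      # Vitest
```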

=== Current friction points
* Cases where it is difficult to instrument the code you want to test and collect coverage from it
* Limited coverage tooling for test frameworks that do not provide native instrumentation or coverage reporting
* Complex test configurations
* End-to-end tests that span multiple repositories or services, making it difficult to map code changes to specific tests

== Key benefits:

* Faster CI/CD pipelines through intelligent test selection.

@@ -490,6 +506,12 @@ Any remaining tests will be analysed the next time test analysis is run.
|true
|Whether the tests should be distributed across a shared queue and fetched across multiple dynamic batches. +
If a test runner has slow start up time per batch, disabling this can speed up tests.
| `coverage-baseline`
|.circleci/example-baseline.info
|Path to a baseline coverage file to subtract from test coverage during analysis. +
Use this to exclude shared setup or initialization code from test impact data. +
The baseline file should be in the same format as your test coverage output (e.g., LCOV format for `<< outputs.lcov >>`).

[Member] Not strictly true, the baseline can be in any supported coverage format and we'll deal with it. There's no reason not to use the same format as test coverage output though.
|===

The following flags are available to be defined on the `circleci run testsuite` command.

@@ -770,6 +792,37 @@ Yes! The branch behavior is fully customizable through your CircleCI configurati

See Scenario 3 in the "Flag Usage Scenarios" section for examples of customizing branch behavior.

=== Can I run analysis on branches other than main?
|
[Member] Might be worth scoping this to analysis and selection, or having a separate point that confirms test selection can be customised the same way. Or, to be more clear on the two phases mentioned above:

Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis typically runs on `main` by default, you can configure it to run on:

. Any specific branch (for example, `develop` or `staging`).
. Multiple branches simultaneously.
|
[Member] This isn't something we want to encourage, running analysis on separate branches simultaneously will result in two sources of truth contending with each other. The current limitations are that each testsuite should have a single source of truth, usually the base branch of commits (
. Feature branches if needed for testing.
. Scheduled pipelines independent of branch.
|
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. ~~ Maybe |
||

See Scenario 3 in the "Flag Usage Scenarios" section for examples of customizing branch behavior.
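As a sketch, branch scoping like this is commonly expressed with standard CircleCI workflow branch filters; the job name `run-test-analysis` below is hypothetical:

```yaml
# Sketch: restrict the analysis job to main using CircleCI branch filters.
# "run-test-analysis" is a hypothetical job name; adapt it to your config.
workflows:
  adaptive-testing:
    jobs:
      - run-test-analysis:
          filters:
            branches:
              only: main
```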
=== Why are there so many files impacting a test?

If you see many files impacting each test during analysis (for example, "...found 150 files impacting test..."), this may be caused by shared setup code, such as global imports or framework initialization, being included in coverage.

*Solution: Use a coverage baseline to exclude setup code*

. Create a minimal test that only performs imports and setup (no test logic)
. Generate coverage from that test and save it to `.circleci/example-baseline.info` (you can name the file however you'd like)
. Reference it in your test suite:

[source,yaml]
----
# .circleci/test-suites.yml
options:
  adaptive-testing: true
  coverage-baseline: .circleci/example-baseline.info
----

The coverage data in the baseline file will be subtracted from each test's coverage during analysis. Rerun analysis and you should see fewer impacting files per test. Note that the baseline file should be in the same format as your test coverage output (e.g., LCOV format for `<< outputs.lcov >>`).
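The steps above might look like the following, assuming Jest; the test file name and paths are placeholders:

```shell
# Hypothetical sketch (Jest assumed): generate baseline coverage from a
# setup-only test, then store it where the test suite config expects it.
npx jest --coverage --coverageReporters=lcov setup-only.test.js
cp coverage/lcov.info .circleci/example-baseline.info
```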

=== What test frameworks are supported?

Adaptive testing is runner-agnostic. We provide default configurations for the following test frameworks:

@@ -782,4 +835,4 @@ Adaptive testing is runner-agnostic. We provide default configurations for the f
* Cypress (E2E testing)
* Vitest

The key requirement is that your test runner can generate coverage data in a parsable format (typically LCOV or similar).
|
[Member] Right now we can only read LCOV and Go's "legacy coverage" format.
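For illustration, here is what a minimal LCOV record looks like; real baseline files are produced by your coverage tooling rather than written by hand, and `src/setup.js` is a placeholder path:

```shell
# Write a minimal LCOV record for illustration only; real baselines are
# produced by coverage tooling, not by hand.
cat > example-baseline.info <<'EOF'
TN:
SF:src/setup.js
DA:1,1
DA:2,1
LF:2
LH:2
end_of_record
EOF

# Count the per-line execution records (DA:<line>,<hits> entries).
grep -c '^DA:' example-baseline.info
```

Each `SF:`/`end_of_record` pair describes one source file, `DA:` lines record per-line hit counts, and `LF:`/`LH:` summarize lines found and lines hit; the `grep` above prints `2`.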