Allow disabling chart datasets: backend #44769
Conversation
Adopt the same pattern BlobAND uses: reslice into bounded variables, modernize the loop with `for i := range n`, and add the gosec nolint that the linter requires even with the slicing in place.
@coderabbitai review
✅ Actions performed: Review triggered.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Adds per-dataset scoping to chart data collection and integrates scrub operations. The cron now builds a CollectScopeFn from AppConfig and team HistoricalData flags and calls chartSvc.CollectDatasets with that resolver. Dataset and datastore interfaces were extended to accept disabledFleetIDs. New datastore methods support batched deletion and masked scrubbing. Worker jobs ChartScrubGlobal and ChartScrubFleet, job-enqueue/dedup checks, and enqueueing from app/team historical_data flips were added; scrub invocations are logged and non-fatal on error.
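The walkthrough's "CollectScopeFn from AppConfig and team HistoricalData flags" can be sketched as follows. These types and names are illustrative stand-ins, not the repo's actual API:

```go
package main

import "fmt"

// CollectScope says, for one dataset on one cron tick, whether to skip
// collection entirely and which fleets' hosts to exclude.
type CollectScope struct {
	Skip             bool   // dataset disabled globally
	DisabledFleetIDs []uint // fleets whose hosts are excluded from the SQL
}

// buildScopeResolver combines a global enabled flag with per-team disables:
// a globally disabled dataset is skipped outright; otherwise the teams that
// disabled it become exclusions pushed down into the dataset queries.
func buildScopeResolver(globalEnabled map[string]bool, teamDisabled map[string][]uint) func(dataset string) CollectScope {
	return func(dataset string) CollectScope {
		if !globalEnabled[dataset] {
			return CollectScope{Skip: true}
		}
		return CollectScope{DisabledFleetIDs: teamDisabled[dataset]}
	}
}

func main() {
	resolve := buildScopeResolver(
		map[string]bool{"uptime": true, "cve": false},
		map[string][]uint{"uptime": {3}},
	)
	fmt.Println(resolve("cve").Skip, resolve("uptime").DisabledFleetIDs) // prints "true [3]"
}
```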
🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 7
🧹 Nitpick comments (1)
server/fleet/historical_data_test.go (1)
84-126: ⚡ Quick win: Add a fleet test for the vulnerabilities disable transition.
Current fleet tests only assert the `uptime` path. Please also cover `Vulnerabilities: true -> false` so the per-fleet enqueue contract is verified for both supported datasets in this mode.
Proposed test addition:
 func TestEnqueueHistoricalDataScrubs_Fleet(t *testing.T) {
+	t.Run("flips true→false on vulnerabilities: enqueues fleet scrub", func(t *testing.T) {
+		enq := &fakeJobEnqueuer{}
+		teamID := uint(7)
+		err := EnqueueHistoricalDataScrubs(t.Context(), enq,
+			HistoricalDataSettings{Uptime: true, Vulnerabilities: true},
+			HistoricalDataSettings{Uptime: true, Vulnerabilities: false},
+			&teamID,
+		)
+		require.NoError(t, err)
+		require.Len(t, enq.jobs, 1)
+		assert.Equal(t, "chart_scrub_dataset_fleet", enq.jobs[0].Name)
+		var args chartScrubFleetArgs
+		require.NoError(t, json.Unmarshal(*enq.jobs[0].Args, &args))
+		assert.Equal(t, "vulnerabilities", args.Dataset)
+		assert.Equal(t, []uint{7}, args.FleetIDs)
+	})

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/fleet/historical_data_test.go` around lines 84 - 126, Add a new subtest inside TestEnqueueHistoricalDataScrubs_Fleet that mirrors the existing uptime "flips true→false" case but flips Vulnerabilities from true→false: create a fakeJobEnqueuer, set teamID := uint(7), call EnqueueHistoricalDataScrubs with HistoricalDataSettings{Vulnerabilities:true,...} → {Vulnerabilities:false,...} and require one job enqueued; unmarshal into chartScrubFleetArgs and assert args.Dataset is "vulnerabilities" and args.FleetIDs equals []uint{7}. Use the same helper types (fakeJobEnqueuer, chartScrubFleetArgs) and test patterns as the uptime case so the per-fleet enqueue contract is covered for vulnerabilities.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@cmd/fleet/cron.go`:
- Around line 1576-1588: When ds.AppConfig(ctx) or ds.ListTeams(ctx, ...) fail,
do not fall back to unscoped collection by calling chartSvc.CollectDatasets(ctx,
time.Now(), nil); instead propagate the error (or skip the tick) so we avoid
unscoped dataset collection. Change the two error branches that currently call
chartSvc.CollectDatasets(...) to log the error as they do and then return the
error (e.g., return nil, err or return err depending on the surrounding function
signature) rather than calling chartSvc.CollectDatasets; update callers if
necessary to handle the propagated error. Ensure you modify the branches that
reference ds.AppConfig, ds.ListTeams and chartSvc.CollectDatasets accordingly.
In `@ee/server/service/teams.go`:
- Around line 425-436: Currently the code calls
fleet.EnqueueHistoricalDataScrubs after SaveTeam and returns an error on
failure, causing a partial-success; instead persist the "scrub intent"
atomically with the team save or fall back to durable write-on-failure so the
operation never surface-fails after commit. Modify the flow around SaveTeam and
EnqueueHistoricalDataScrubs: either (A) add/use a datastore method (e.g.,
SaveTeamWithScrubIntent or PersistHistoricalScrubIntent) that writes the updated
team and a scrub-intent/outbox row in one DB transaction so the worker can pick
it up, or (B) if enqueue fails, do not return an error — write a durable intent
record via svc.ds (e.g., a new CreateHistoricalScrubIntent or
UpsertTeamScrubIntent using team.ID and team.Config.Features.HistoricalData) and
log the enqueue failure, then continue executing the remaining post-save side
effects so state is consistent and retryable by a background worker.
In `@server/chart/internal/mysql/data.go`:
- Around line 506-512: The SELECT used for paging scrub rows is using
ds.reader(ctx) (reads from replica) which can miss recent primary rows; change
the DB handle to the writer connection (use ds.writer(ctx) in place of
ds.reader(ctx)) in the SelectContext call that fetches id, host_bitmap for the
scrub loop so the query reads from primary and avoids replica lag leaving behind
disabled historical data.
- Around line 455-456: The scrub path is reading fleet membership from a replica
via sqlx.SelectContext using ds.reader(ctx), which can return stale data; change
the DB handle to the primary by calling ds.writer(ctx) in the SelectContext call
(replace ds.reader(ctx) with ds.writer(ctx) in the sqlx.SelectContext invocation
that selects host IDs in fleets) so the scrub job reads current team membership.
In `@server/fleet/historical_data.go`:
- Around line 121-145: The job payload currently uses the config key strings
like "vulnerabilities" (from the `changes` loop over `dataset`), but the scrub
worker expects the internal chart dataset ID ("cve"); update the code that
builds `argsJSON` in the `for _, c := range changes` loop (where
`chartScrubDatasetGlobalJobName`/`chartScrubDatasetFleetJobName` and
`chartScrubGlobalArgs`/`chartScrubFleetArgs` are used) to translate config keys
to the internal dataset ID (e.g., map "vulnerabilities" -> "cve") and pass that
internal ID into the `Dataset` field (or use the existing chart dataset constant
if available) so scrub jobs target the correct dataset.
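The key translation the finding asks for can be sketched as a small lookup. The "vulnerabilities" → "cve" mapping comes from this review comment; treat it as an assumption rather than verified repo behavior, and prefer the repo's dataset constants if they exist:

```go
package main

import "fmt"

// configKeyToDatasetID translates features.historical_data config keys into
// the internal chart dataset IDs the scrub worker expects. Hypothetical
// helper; the mapping table mirrors the review finding.
func configKeyToDatasetID(key string) (string, bool) {
	m := map[string]string{
		"uptime":          "uptime",
		"vulnerabilities": "cve", // config key differs from internal dataset ID
	}
	id, ok := m[key]
	return id, ok
}

func main() {
	id, _ := configKeyToDatasetID("vulnerabilities")
	fmt.Println(id) // prints "cve"
}
```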
In `@server/service/appconfig.go`:
- Around line 958-970: After SaveAppConfig commits, do not abort the request on
EnqueueHistoricalDataScrubs failure; change the behavior in the caller around
SaveAppConfig/EnqueueHistoricalDataScrubs so that errors from
fleet.EnqueueHistoricalDataScrubs(ctx, svc.ds,
oldAppConfig.Features.HistoricalData, appConfig.Features.HistoricalData, nil)
are not returned as a hard error—either persist scrub intents together with the
config (transactional outbox) or catch the error, log it with context (including
oldAppConfig and appConfig IDs) and enqueue a durable retry (e.g., write a
persistent "pending scrub" record via svc.ds or schedule a background retry
task) so downstream post-save work still runs and the enqueue is retried
server-side until successful.
In `@server/worker/chart_scrub.go`:
- Around line 81-87: The handler currently treats an empty args.FleetIDs as
success by returning nil; change this to return a non-nil error so malformed
scrub jobs fail and are observable. In the len(args.FleetIDs) == 0 branch (the
check for empty fleet IDs), replace the silent return nil with returning an
error (e.g., errors.New or fmt.Errorf) that includes the dataset and that
fleet_ids was empty, and keep the existing c.Log.WarnContext call (or change to
c.Log.ErrorContext) so the error is both logged and returned from the function.
---
Nitpick comments:
In `@server/fleet/historical_data_test.go`:
- Around line 84-126: Add a new subtest inside
TestEnqueueHistoricalDataScrubs_Fleet that mirrors the existing uptime "flips
true→false" case but flips Vulnerabilities from true→false: create a
fakeJobEnqueuer, set teamID := uint(7), call EnqueueHistoricalDataScrubs with
HistoricalDataSettings{Vulnerabilities:true,...} → {Vulnerabilities:false,...}
and require one job enqueued; unmarshal into chartScrubFleetArgs and assert
args.Dataset is "vulnerabilities" and args.FleetIDs equals []uint{7}. Use the
same helper types (fakeJobEnqueuer, chartScrubFleetArgs) and test patterns as
the uptime case so the per-fleet enqueue contract is covered for
vulnerabilities.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 75fb660b-ee25-4940-bc54-cef984039d7e
⛔ Files ignored due to path filters (4)
- openspec/changes/chart-disabling-collection-scrub/design.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/proposal.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/specs/chart-historical-data-collection/spec.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/tasks.md is excluded by `!**/*.md`
📒 Files selected for processing (22)
- cmd/fleet/cron.go
- cmd/fleet/cron_test.go
- cmd/fleet/serve.go
- ee/server/service/teams.go
- openspec/changes/chart-disabling-collection-scrub/.openspec.yaml
- server/chart/api/chart.go
- server/chart/api/service.go
- server/chart/blob.go
- server/chart/blob_test.go
- server/chart/datasets.go
- server/chart/internal/mysql/charts.go
- server/chart/internal/mysql/data.go
- server/chart/internal/mysql/data_test.go
- server/chart/internal/service/service.go
- server/chart/internal/service/service_test.go
- server/chart/internal/testutils/testutils.go
- server/chart/internal/types/chart.go
- server/fleet/historical_data.go
- server/fleet/historical_data_test.go
- server/service/appconfig.go
- server/worker/chart_scrub.go
- server/worker/chart_scrub_test.go
chartScrubGlobal := &worker.ChartScrubGlobal{
	ChartService: chartSvc,
	Log:          logger,
}
chartScrubFleet := &worker.ChartScrubFleet{
	ChartService: chartSvc,
	Log:          logger,
}
w.Register(jira, zendesk, macosSetupAsst, dbMigrate, vppVerify, softwareWorker, chartScrubGlobal, chartScrubFleet)
Adding these to the worker schedule since they use the jobs table and run only when needed.
Codecov Report
❌ Patch coverage is …
Additional details and impacted files:
@@ Coverage Diff @@
## main #44769 +/- ##
==========================================
+ Coverage 66.68% 66.72% +0.03%
==========================================
Files 2664 2668 +4
Lines 214605 215921 +1316
Branches 9876 9876
==========================================
+ Hits 143106 144069 +963
- Misses 58478 58735 +257
- Partials 13021 13117 +96
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@coderabbitai review
✅ Actions performed: Review triggered.
🧹 Nitpick comments (1)
server/chart/internal/mysql/data.go (1)
421-440: 💤 Low value: Mirror CleanupSCDData's loop pattern: explicit ctx check + earlier termination.
Two small consistency gaps with the sibling CleanupSCDData (lines 390–416) on the same table:
- No `ctx.Err()` check at the top of the loop. `ExecContext` will surface cancellation eventually, but the explicit check makes cancellation responsive between batches and matches the pattern in the same file.
- Termination on `n == 0` costs an extra empty `DELETE` round-trip whenever the final batch happened to be exactly `batchSize`. `CleanupSCDData` uses `n < int64(scdCleanupBatch)` for the same loop shape.

♻️ Proposed alignment with CleanupSCDData:
 func (ds *Datastore) DeleteAllForDataset(ctx context.Context, dataset string, batchSize int) error {
 	if batchSize <= 0 {
 		batchSize = 5000
 	}
 	for {
+		if err := ctx.Err(); err != nil {
+			return ctxerr.Wrap(ctx, err, "delete SCD rows for dataset")
+		}
 		res, err := ds.writer(ctx).ExecContext(ctx, `DELETE FROM host_scd_data WHERE dataset = ? LIMIT ?`, dataset, batchSize)
 		if err != nil {
 			return ctxerr.Wrap(ctx, err, "delete SCD rows for dataset")
 		}
 		n, err := res.RowsAffected()
 		if err != nil {
 			return ctxerr.Wrap(ctx, err, "rows affected for dataset delete")
 		}
-		if n == 0 {
+		if n < int64(batchSize) {
 			return nil
 		}
 	}
 }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@server/chart/internal/mysql/data.go` around lines 421 - 440, DeleteAllForDataset's loop should mirror CleanupSCDData: add an explicit ctx cancellation check at the top of the for loop (if ctx.Err() != nil { return ctx.Err() }) so cancellation is responsive between batches, and change the loop termination condition from if n == 0 to if n < int64(batchSize) (compare the int64 result from res.RowsAffected() against the batchSize) to avoid an extra empty DELETE when the last batch exactly equals batchSize; keep existing error wrapping logic around ExecContext and RowsAffected.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@server/chart/internal/mysql/data.go`:
- Around line 421-440: DeleteAllForDataset's loop should mirror CleanupSCDData:
add an explicit ctx cancellation check at the top of the for loop (if ctx.Err()
!= nil { return ctx.Err() }) so cancellation is responsive between batches, and
change the loop termination condition from if n == 0 to if n < int64(batchSize)
(compare the int64 result from res.RowsAffected() against the batchSize) to
avoid an extra empty DELETE when the last batch exactly equals batchSize; keep
existing error wrapping logic around ExecContext and RowsAffected.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3bdaed7b-bf0e-4fd0-81b3-9b8b5451d9f5
⛔ Files ignored due to path filters (3)
- openspec/changes/chart-disabling-collection-scrub/design.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/specs/chart-historical-data-collection/spec.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/tasks.md is excluded by `!**/*.md`
📒 Files selected for processing (9)
- cmd/fleet/cron.go
- ee/server/service/teams.go
- server/chart/internal/mysql/data.go
- server/datastore/mysql/jobs.go
- server/fleet/datastore.go
- server/fleet/historical_data.go
- server/fleet/historical_data_test.go
- server/mock/datastore_mock.go
- server/service/appconfig.go
👮 Files not reviewed due to content moderation or server errors (5)
- server/datastore/mysql/jobs.go
- server/fleet/historical_data.go
- server/service/appconfig.go
- ee/server/service/teams.go
- cmd/fleet/cron.go
@claude review once
Pull request overview
Implements backend enforcement for disabling chart datasets by (1) scoping/halting collection during the cron tick based on global + per-team features.historical_data settings and (2) enqueueing asynchronous scrub jobs to delete or bit-scrub already-collected host_scd_data when a dataset is disabled.
Changes:
- Extend chart collection plumbing to support per-dataset scoping (`skip` + `disabledFleetIDs`) and push fleet exclusions down into the dataset SQL queries.
- Add chart scrub job types (global delete + fleet-scoped bitmap ANDNOT) and register them with the worker schedule.
- Enqueue scrub jobs after config/team commits when historical data settings flip from enabled → disabled, with queued-job dedup by `(name, args)`.
Reviewed changes
Copilot reviewed 30 out of 30 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| server/worker/chart_scrub.go | Adds global/fleet scrub worker job handlers + enqueue helpers. |
| server/worker/chart_scrub_test.go | Unit tests for scrub job handlers and wire-contract constants/JSON. |
| server/service/appconfig.go | Enqueues global scrub jobs after app config disables a dataset. |
| server/mock/datastore_mock.go | Adds mock hook for HasQueuedJobWithArgs. |
| server/fleet/historical_data.go | Adds scrub enqueue helper + dedup gate via HasQueuedJobWithArgs. |
| server/fleet/historical_data_test.go | Tests scrub enqueue behavior (flip detection, dedup, error propagation). |
| server/fleet/datastore.go | Extends datastore interface with HasQueuedJobWithArgs. |
| server/datastore/mysql/jobs.go | Implements HasQueuedJobWithArgs using JSON equality + primary reads. |
| server/chart/internal/types/chart.go | Extends chart datastore interface for scrubbing + scoped queries. |
| server/chart/internal/testutils/testutils.go | Adds helpers to insert/read host_scd_data rows with explicit bitmaps. |
| server/chart/internal/service/service.go | Wires scope resolver into CollectDatasets and adds scrub service methods. |
| server/chart/internal/service/service_test.go | Updates mocks/tests for new collection + scrub APIs and scope forwarding. |
| server/chart/internal/mysql/data.go | Implements dataset deletes + fleet scrub bitmap application (paged + chunked UPDATE). |
| server/chart/internal/mysql/data_test.go | Integration-ish tests for bitmap scrubbing behavior. |
| server/chart/internal/mysql/charts.go | Pushes disabled-fleet exclusions down into uptime/CVE collection SQL. |
| server/chart/datasets.go | Updates dataset Collect methods to accept/forward disabledFleetIDs. |
| server/chart/blob.go | Adds BlobANDNOT bitmap helper. |
| server/chart/blob_test.go | Unit tests for BlobANDNOT. |
| server/chart/api/service.go | Extends chart service API for scoped collection + scrub methods. |
| server/chart/api/chart.go | Extends Dataset/DatasetStore interfaces to support scoped collection. |
| openspec/changes/chart-disabling-collection-scrub/tasks.md | Tracks implementation tasks for this change. |
| openspec/changes/chart-disabling-collection-scrub/specs/chart-historical-data-collection/spec.md | Adds detailed requirements/spec for collection gating + scrub behavior. |
| openspec/changes/chart-disabling-collection-scrub/proposal.md | Proposal describing motivation/approach. |
| openspec/changes/chart-disabling-collection-scrub/design.md | Design document for scoping + scrubbing. |
| openspec/changes/chart-disabling-collection-scrub/.openspec.yaml | OpenSpec metadata for the change. |
| ee/server/service/teams.go | Enqueues per-team scrub jobs after team disables a dataset (PATCH + GitOps path). |
| cmd/fleetctl/fleetctl/testing_utils/testing_utils.go | Stubs HasQueuedJobWithArgs for GitOps test server setup. |
| cmd/fleet/serve.go | Passes chartSvc into worker integration schedule registration. |
| cmd/fleet/cron.go | Registers scrub workers; builds chart scope resolver each tick before collection. |
| cmd/fleet/cron_test.go | Unit tests for buildChartScopeResolver. |
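The HasQueuedJobWithArgs dedup listed above compares job args by JSON equality. The useful property is that comparison ignores key order and whitespace, which MySQL's `CAST(... AS JSON)` comparison provides; the same semantics can be demonstrated in pure Go (illustrative helper only, the real check runs in SQL):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonEqual reports whether two JSON documents are semantically equal,
// ignoring key order and formatting, so {"a":1,"b":2} matches {"b":2,"a":1}.
func jsonEqual(a, b []byte) bool {
	var va, vb any
	if json.Unmarshal(a, &va) != nil || json.Unmarshal(b, &vb) != nil {
		return false
	}
	return reflect.DeepEqual(va, vb)
}

func main() {
	fmt.Println(jsonEqual(
		[]byte(`{"dataset":"uptime","fleet_ids":[7]}`),
		[]byte(`{ "fleet_ids": [7], "dataset": "uptime" }`),
	)) // prints "true"
}
```

A plain string comparison on the stored args column would miss these equal-but-reordered payloads, which is why the dedup casts to JSON before comparing.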
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@ee/server/service/teams.go`:
- Around line 425-439: The scrub enqueue must run immediately after SaveTeam
commits to avoid missing jobs if later post-commit handlers fail; move the
EnqueueHistoricalDataScrubs(...) call so it executes right after the successful
SaveTeam call (before calling ModifyTeam, ApplyEnrollSecrets,
OnHistoricalDataChanged, or other post-save side effects). Update the code paths
in ModifyTeam and editTeamFromSpec to remove or not duplicate the enqueue there,
and keep the same error-handling (log-and-continue) behavior around
EnqueueHistoricalDataScrubs to avoid interrupting subsequent post-save logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 24df7a52-0d79-469f-b031-e3025a41d644
⛔ Files ignored due to path filters (2)
- openspec/changes/chart-disabling-collection-scrub/design.md is excluded by `!**/*.md`
- openspec/changes/chart-disabling-collection-scrub/tasks.md is excluded by `!**/*.md`
📒 Files selected for processing (5)
- cmd/fleetctl/fleetctl/testing_utils/testing_utils.go
- ee/server/service/teams.go
- server/datastore/mysql/jobs.go
- server/fleet/historical_data.go
- server/fleet/historical_data_test.go
Local testing complete!
Cherry-pick of #44769 into the RC branch.
Related issue: #44077
Details
This PR implements enforcement of the "disable dataset" feature.
When a dataset is disabled globally, we:
- Stop collecting it (the `Collect` method for that dataset is not called in the cron job)

When a dataset is disabled for one or more fleets, we:
- Pass the disabled fleet IDs into the `Collect` method. Each dataset is responsible for filtering out hosts in the most efficient way possible

Checklist for submitter
If some of the following don't apply, delete the relevant line.
Changes file added for user-visible changes in `changes/`, `orbit/changes/`, or `ee/fleetd-chrome/changes`. See Changes files for more information.
n/a, unreleased
Input data is properly validated, `SELECT *` is avoided, SQL injection is prevented (using placeholders for values in statements), JS inline code is prevented especially for url redirects, and untrusted data interpolated into shell scripts/commands is validated against shell metacharacters.
Timeouts are implemented and retries are limited to avoid infinite loops
Testing
Added/updated automated tests
Where appropriate, automated tests simulate multiple hosts and test for host isolation (updates to one host's records do not affect another)
QA'd all new/changed functionality manually
Prerequisites / Test Setup
- … (`host_scd_data` for `dataset='cve'` will have non-empty bitmaps)
- `features.historical_data.uptime` and `features.historical_data.vulnerabilities` start as `true`; same for every team
- `host_scd_data` has rows for both `uptime` and `cve`: `SELECT dataset, COUNT(*) FROM host_scd_data GROUP BY dataset;`

1. Cron Skips Globally-Disabled Datasets
1.1 Global disable of `uptime`
- `PATCH /api/v1/fleet/config` with `features.historical_data.uptime = false`
- `disabled_historical_dataset` activity for `uptime` (existing behavior)
- For `dataset='uptime'`: `SELECT MAX(valid_from) FROM host_scd_data WHERE dataset='uptime';` should not advance after the disable
- `cve` rows still appear on the same tick (per-dataset isolation)
- Set `historical_data.uptime = true`; new `uptime` rows appear again

1.2 Global disable of `vulnerabilities`
- Disable `features.historical_data.vulnerabilities`
- `cve` writes stop, `uptime` continues

1.3 Both disabled globally
2. Per-Fleet Disable — Cron Filters at SQL
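This section verifies that disabled fleets are excluded at the SQL layer. A stdlib-only sketch of building such an exclusion clause with placeholders (names are illustrative; the actual queries live in server/chart/internal/mysql):

```go
package main

import (
	"fmt"
	"strings"
)

// notInClause builds a "team_id NOT IN (?,...)" fragment plus its args for
// the disabled fleet IDs, keeping values as placeholders (no SQL injection).
// No-team hosts (team_id IS NULL) follow the global flag, so they stay in.
func notInClause(disabledFleetIDs []uint) (string, []any) {
	if len(disabledFleetIDs) == 0 {
		return "", nil
	}
	ph := strings.TrimSuffix(strings.Repeat("?,", len(disabledFleetIDs)), ",")
	args := make([]any, len(disabledFleetIDs))
	for i, id := range disabledFleetIDs {
		args[i] = id
	}
	return fmt.Sprintf("AND (h.team_id IS NULL OR h.team_id NOT IN (%s))", ph), args
}

func main() {
	clause, args := notInClause([]uint{3, 9})
	fmt.Println(clause, len(args)) // prints the clause with two placeholders and "2"
}
```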
2.1 Single team disabled for one dataset
- Set team T1's `features.historical_data.uptime = false`
- `disabled_historical_dataset` activity emitted for T1
- Pick a T1 host (`H_T1`); confirm its bit is NOT set in any `uptime` row written after the disable by filtering the chart to that host
- Pick a T2 host (`H_T2`); confirm its bit IS still set in the same rows (T2 is not disabled)
- Pick a host with no team (`H_none`); confirm its bit IS still set (no-team hosts follow the global value)

2.2 Same fleet, different dataset
- `cve` rows continue on subsequent ticks (per-dataset isolation)

2.3 All teams disabled, global on, no-team hosts
3. Global Scrub — DELETE
3.1 Successful global scrub
- Before: `SELECT COUNT(*) FROM host_scd_data WHERE dataset='uptime';` (should be > 5000 to exercise the loop; if not, manually insert filler rows or run multiple cron ticks)
- After the scrub: `SELECT COUNT(*) FROM host_scd_data WHERE dataset='uptime';`
4.1 Single-fleet scrub clears bits
S)host_scd_datarow fordataset='uptime'has bits set at positions inSby filtering the chart to those hostsdataset='uptime'now has NO bits set at any position inS. Spot-check by filtering the chart to those hostsdataset='cve'(different dataset) are untouched4.2 Multi-fleet scrub via GitOps batch
dataset='cve'5. Activity Feed Cross-Check
disabled_historical_datasetactivity (existing behavior, unchanged)
ID and name
causes no scrub (no
host_scd_datadata change observedafter the cron tick)
emitted (out of scope for v1)
enabled_historical_datasetactivitiesand do NOT emit any scrub-related activity
6. Regression Spot Checks
same data as before this change (no behavior change in the
"all on" case)
fleetctl apply) is benign:applying the unchanged config produces no scrub jobs and no
activities
historical_dataomitted from team specsdefaults to
true(per the gitops-api change) and does nottrigger spurious scrubs
host_scd_datatablehas no
dataset='cve'rows; the chart UI for "vulnerablehosts over time" shows an empty/zero state without errors