Conversation
📝 Walkthrough

Refactors Git plumbing instantiation to a factory, adds a Bun-aware blob write path, introduces a multi-step release verification script and tests, adds property-based tests and helpers, bumps the version to 5.3.3, and updates docs, CI/Docker, and contribution/release workflows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as release:verify<br/>(scripts/release/verify.js)
    participant Steps as RELEASE_STEPS<br/>(metadata & commands)
    participant Runner as Shell Runner
    participant Tools as Lint/Test/Pack
    participant Summary as Markdown Summary
    User->>CLI: pnpm release:verify
    CLI->>CLI: read version from package.json
    CLI->>Steps: iterate steps (lint, unit, docker, integration, pack, publish)
    loop for each step
        CLI->>Runner: execute step.command (via defaultRunner)
        Runner->>Tools: spawn/exec command
        Tools-->>Runner: stdout/stderr + exit code
        Runner->>CLI: buildStepResult (include test count if vitest)
        CLI->>Steps: collect step result
    end
    CLI->>Summary: renderMarkdownSummary(version, results, totalTests)
    Summary-->>User: print markdown summary
    alt all pass
        CLI->>User: exit 0
    else any fail
        CLI->>User: throw ReleaseVerifyError + exit 1
    end
```
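The step loop in the diagram can be sketched as a minimal runner. This is a simplified sketch: the step shape, runner contract, and summary format here are assumptions for illustration, not the actual `scripts/release/verify.js`.

```javascript
// Minimal sketch of the release:verify loop shown in the diagram.
// Step shape and helper names are simplified assumptions, not the real module.
class ReleaseVerifyError extends Error {
  constructor(message, step, summary) {
    super(message);
    this.name = 'ReleaseVerifyError';
    this.step = step;
    this.summary = summary;
  }
}

function renderSummary(results) {
  // One markdown table row per step: | id | PASS/FAIL | testCount |
  return results
    .map((r) => `| ${r.id} | ${r.passed ? 'PASS' : 'FAIL'} | ${r.testCount} |`)
    .join('\n');
}

async function runReleaseVerify({ steps, runner }) {
  const results = [];
  for (const step of steps) {
    const outcome = await runner(step); // { code, testCount? }
    const result = {
      id: step.id,
      passed: outcome.code === 0,
      testCount: outcome.testCount ?? 0,
    };
    results.push(result);
    if (!result.passed) {
      // Stop on first failure, but expose the partial summary on the error.
      throw new ReleaseVerifyError(`step ${step.id} failed`, result, renderSummary(results));
    }
  }
  return { results, totalTests: results.reduce((acc, r) => acc + r.testCount, 0) };
}
```

A failing step both aborts the loop and carries the partial markdown summary, which matches the `alt all pass / else any fail` branch in the diagram.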
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review please 🥕
✅ Actions performed: Review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Dockerfile (1)
Lines 1-42: ⚠️ Potential issue | 🟠 Major

Run test images as a non-root user.

All stages currently execute as root. Add a non-root user and switch before `CMD` to reduce container breakout blast radius.

🔐 Suggested hardening patch
```diff
 FROM ubuntu:24.04 AS ubuntu-base
 ENV DEBIAN_FRONTEND=noninteractive
 RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates git \
     && rm -rf /var/lib/apt/lists/*
+RUN groupadd --system app && useradd --system --gid app --create-home --shell /usr/sbin/nologin app

 # --- Node ---
 FROM ubuntu-base AS node
 COPY --from=node-runtime /usr/local/ /usr/local/
 RUN npm install -g pnpm@10
 WORKDIR /app
 COPY package.json pnpm-lock.yaml ./
 RUN pnpm install --frozen-lockfile
 COPY . .
+RUN chown -R app:app /app
 ENV GIT_STUNTS_DOCKER=1
+USER app
 CMD ["pnpm", "vitest", "run", "test/unit"]

 # --- Bun ---
 FROM ubuntu-base AS bun
 COPY --from=bun-runtime /usr/local/bin/bun /usr/local/bin/bun
 COPY --from=bun-runtime /usr/local/bin/bunx /usr/local/bin/bunx
 WORKDIR /app
 COPY package.json ./
 RUN bun install
 COPY . .
+RUN chown -R app:app /app
 ENV GIT_STUNTS_DOCKER=1
+USER app
 CMD ["bunx", "vitest", "run", "test/unit"]

 # --- Deno ---
 FROM ubuntu-base AS deno
 COPY --from=deno-runtime /usr/bin/deno /usr/local/bin/deno
 WORKDIR /app
 COPY package.json ./
 RUN deno install --allow-scripts || true
 COPY . .
 RUN deno install --allow-scripts
+RUN chown -R app:app /app
 ENV GIT_STUNTS_DOCKER=1
+USER app
 CMD ["deno", "run", "-A", "npm:vitest", "run", "test/unit"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 1 - 42, Add a non-root user and switch to it in each stage (node, bun, deno) before CMD: create a user (e.g., gitstunts) and group, set HOME, chown /app and any copied runtime binaries (e.g., /usr/local, /usr/local/bin/bun, /usr/local/bin/bunx, /usr/local/bin/deno) as needed, then add a USER gitstunts line in the node, bun and deno stages so the final CMD (["pnpm"..."vitest"], ["bunx"..."vitest"], ["deno"..."vitest"]) runs unprivileged; ensure any install steps that require root stay before the USER switch and preserve file permissions for that non-root user.
🧹 Nitpick comments (1)
test/unit/scripts/release-verify.test.js (1)
Lines 73-85: Consider simplifying the redundant error capture.

The test calls `runReleaseVerify` twice with the same failing runner: once at line 76 (assertion only) and again at line 81 (to capture the error). Since `vi.fn()` accumulates call counts, the assertion at line 84 (`toHaveBeenCalledTimes(6)`) reflects both invocations (3 steps each). This works but is subtle. Consider refactoring to a single call:

♻️ Suggested simplification
```diff
 it('stops on the first failure and exposes a partial summary', async () => {
   const runner = makeFailingRunner('unit-bun');
-  await expect(runReleaseVerify({ runner, logger: QUIET_LOGGER })).rejects.toMatchObject({
-    name: 'ReleaseVerifyError',
-    step: expect.objectContaining({ id: 'unit-bun', passed: false }),
-  });
-
-  const failure = await runReleaseVerify({ runner, logger: QUIET_LOGGER }).catch((error) => error);
+  const failure = await runReleaseVerify({ runner, logger: QUIET_LOGGER }).catch((error) => error);
   expect(failure).toBeInstanceOf(ReleaseVerifyError);
+  expect(failure.name).toBe('ReleaseVerifyError');
+  expect(failure.step).toMatchObject({ id: 'unit-bun', passed: false });
   expect(failure.summary).toContain('| Unit Tests (Bun) | FAIL | 5 |');
-  expect(runner).toHaveBeenCalledTimes(6);
+  expect(runner).toHaveBeenCalledTimes(3);
 });
```

🤖 Prompt for AI Agents
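The single-call pattern in the suggested diff hinges on `.catch((error) => error)`, which resolves the promise with the thrown error instead of rejecting, so one invocation serves both the type assertion and the detailed inspection. A standalone sketch with hypothetical names:

```javascript
// Standalone sketch of the single-call failure-capture pattern from the
// suggested diff. StepError and failingRun are hypothetical stand-ins.
class StepError extends Error {
  constructor(step) {
    super(`step ${step.id} failed`);
    this.name = 'StepError';
    this.step = step;
  }
}

async function failingRun() {
  throw new StepError({ id: 'unit-bun', passed: false });
}

async function captureFailure() {
  // .catch((error) => error) converts the rejection into a resolved value,
  // so the caller can assert on the error object directly.
  return failingRun().catch((error) => error);
}
```

Because the promise resolves with the error, no second invocation is needed, and any mock call counters reflect exactly one run.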
Verify each finding against the current code and only fix it if needed. In `@test/unit/scripts/release-verify.test.js` around lines 73 - 85, The test currently calls runReleaseVerify twice which doubles runner invocations; change it to a single invocation that both asserts the rejection and captures the error for further assertions: call runReleaseVerify({ runner, logger: QUIET_LOGGER }) once (using either await expect(...).rejects.toMatchObject(...) and then await the same call inside a try/catch to capture the error, or perform a single try/catch to capture the thrown ReleaseVerifyError), then assert on failure.summary and runner call count; update references to runReleaseVerify, makeFailingRunner, ReleaseVerifyError, QUIET_LOGGER and runner accordingly so the runner invocation count reflects only the single test run.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/release/verify.js`:
- Around line 188-200: Wrap the call to runner(step, { cwd }) in a try/catch so
any thrown exception becomes a step failure instead of escaping: catch the
error, set an outcome object describing the failure (include the thrown
error/message), then call buildStepResult(step, outcome) and push that result
onto results; after that reuse the existing logic (totalObservedTests,
renderMarkdownSummary) and throw a ReleaseVerifyError with the structured
summary if result.passed is false. Reference runner, buildStepResult,
totalObservedTests, renderMarkdownSummary, ReleaseVerifyError, step, results,
and outcome when making the change.
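The try/catch fix described above can be sketched as follows; `executeStep` and the outcome shape are illustrative stand-ins for the real runner loop, not the actual `scripts/release/verify.js`:

```javascript
// Sketch of the fix: an exception thrown by the runner becomes a failed step
// outcome instead of escaping the loop. Names are illustrative assumptions.
async function executeStep(step, runner, results) {
  let outcome;
  try {
    outcome = await runner(step);
  } catch (error) {
    // Convert the thrown error into a failure outcome so the step is
    // recorded (and summarized) like any other failure.
    outcome = { code: 1, signal: null, error: error.message };
  }
  const result = {
    id: step.id,
    passed: outcome.code === 0,
    error: outcome.error ?? null,
  };
  results.push(result);
  return result;
}
```

With this shape, a spawn failure (for example, a missing binary) still produces a row in the markdown summary rather than an unhandled rejection.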
---
Outside diff comments:
In `@Dockerfile`:
- Around line 1-42: Add a non-root user and switch to it in each stage (node,
bun, deno) before CMD: create a user (e.g., gitstunts) and group, set HOME,
chown /app and any copied runtime binaries (e.g., /usr/local,
/usr/local/bin/bun, /usr/local/bin/bunx, /usr/local/bin/deno) as needed, then
add a USER gitstunts line in the node, bun and deno stages so the final CMD
(["pnpm"..."vitest"], ["bunx"..."vitest"], ["deno"..."vitest"]) runs
unprivileged; ensure any install steps that require root stay before the USER
switch and preserve file permissions for that non-root user.
---
Nitpick comments:
In `@test/unit/scripts/release-verify.test.js`:
- Around line 73-85: The test currently calls runReleaseVerify twice which
doubles runner invocations; change it to a single invocation that both asserts
the rejection and captures the error for further assertions: call
runReleaseVerify({ runner, logger: QUIET_LOGGER }) once (using either await
expect(...).rejects.toMatchObject(...) and then await the same call inside a
try/catch to capture the error, or perform a single try/catch to capture the
thrown ReleaseVerifyError), then assert on failure.summary and runner call
count; update references to runReleaseVerify, makeFailingRunner,
ReleaseVerifyError, QUIET_LOGGER and runner accordingly so the runner invocation
count reflects only the single test run.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1b78e1fa-b201-46de-a152-c8006ff10cdf
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (25)
- .github/CODEOWNERS
- CHANGELOG.md
- CONTRIBUTING.md
- Dockerfile
- README.md
- ROADMAP.md
- STATUS.md
- bin/git-cas.js
- docs/RELEASE.md
- jsr.json
- package.json
- scripts/release/verify.js
- src/infrastructure/adapters/GitPersistenceAdapter.js
- src/infrastructure/createGitPlumbing.js
- test/CONVENTIONS.md
- test/helpers/property.js
- test/integration/round-trip.test.js
- test/integration/vault-cli.test.js
- test/integration/vault.test.js
- test/unit/domain/services/CasService.envelope.property.test.js
- test/unit/domain/services/rotateVaultPassphrase.test.js
- test/unit/facade/ContentAddressableStore.rotation.test.js
- test/unit/infrastructure/adapters/GitPersistenceAdapter.writeBlob.test.js
- test/unit/infrastructure/createGitPlumbing.test.js
- test/unit/scripts/release-verify.test.js
@coderabbitai review please 🥕

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/unit/scripts/release-verify.test.js (1)
Lines 99-106: Avoid hardcoding the total test count to reduce maintenance churn.

`expect(report.totalTests).toBe(30)` couples this test to the current number of `testCount` steps. Derive the expected total from `RELEASE_STEPS` to keep intent stable as steps evolve.

♻️ Suggested refactor
```diff
 it('runs the release steps in order and aggregates test counts', async () => {
   const runner = makeSuccessRunner();
   const report = await runReleaseVerify({ runner, logger: QUIET_LOGGER });
+  const expectedTotal = RELEASE_STEPS.filter((step) => step.testCount).length * 5;
   expect(runner).toHaveBeenCalledTimes(RELEASE_STEPS.length);
-  expect(report.totalTests).toBe(30);
+  expect(report.totalTests).toBe(expectedTotal);
   expect(report.results.every((result) => result.passed)).toBe(true);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/scripts/release-verify.test.js` around lines 99 - 106, Replace the hardcoded expected totalTests value with a computed sum derived from the RELEASE_STEPS constant: compute the expected total by reducing RELEASE_STEPS over each step's testCount and assert report.totalTests equals that computed value so the test adapts when RELEASE_STEPS changes; update the assertion in the test that calls runReleaseVerify (in release-verify.test.js) that currently checks expect(report.totalTests).toBe(30) to use the computed sum via RELEASE_STEPS.reduce((acc, step) => acc + step.testCount, 0).
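The reduce suggested in the prompt, as a standalone sketch; the `RELEASE_STEPS` shape here is a hypothetical example, not the project's actual constant:

```javascript
// Sketch of deriving the expected total from step metadata instead of
// hardcoding 30. This RELEASE_STEPS shape is an assumption for illustration.
const RELEASE_STEPS = [
  { id: 'lint', testCount: 0 },
  { id: 'unit-node', testCount: 5 },
  { id: 'unit-bun', testCount: 5 },
];

// Sum testCount across all steps; missing counts default to 0.
const expectedTotal = RELEASE_STEPS.reduce(
  (acc, step) => acc + (step.testCount ?? 0),
  0,
);
```

Asserting against `expectedTotal` keeps the test's intent (aggregation is correct) while letting the step list evolve.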
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/release/verify.js`:
- Around line 149-156: In buildStepResult normalize outcome.signal to a single
value (e.g., const signal = outcome.signal ?? null) and use that normalized
signal when computing passed and when setting the signal property, so that
undefined signals are treated as null and a step with code === 0 and no signal
is correctly marked passed; update references in the returned object to use this
normalized signal instead of outcome.signal directly.
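The normalization described above, as a minimal sketch; the field names follow the comment, not necessarily the actual `buildStepResult`:

```javascript
// Sketch of the signal normalization: treat an undefined signal as null so a
// step with code === 0 and no signal is correctly marked passed.
function buildStepResult(step, outcome) {
  const signal = outcome.signal ?? null; // undefined -> null
  return {
    id: step.id,
    passed: outcome.code === 0 && signal === null,
    code: outcome.code,
    signal, // use the normalized value, not outcome.signal directly
  };
}
```

Without the `?? null`, an `undefined` signal would fail a strict `signal === null` comparison and mark an otherwise successful step as failed.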
---
Nitpick comments:
In `@test/unit/scripts/release-verify.test.js`:
- Around line 99-106: Replace the hardcoded expected totalTests value with a
computed sum derived from the RELEASE_STEPS constant: compute the expected total
by reducing RELEASE_STEPS over each step's testCount and assert
report.totalTests equals that computed value so the test adapts when
RELEASE_STEPS changes; update the assertion in the test that calls
runReleaseVerify (in release-verify.test.js) that currently checks
expect(report.totalTests).toBe(30) to use the computed sum via
RELEASE_STEPS.reduce((acc, step) => acc + step.testCount, 0).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 561a4cc1-79dc-43d3-8769-1dcfaec029aa
📒 Files selected for processing (5)
- CHANGELOG.md
- Dockerfile
- scripts/release/verify.js
- src/infrastructure/createGitPlumbing.js
- test/unit/scripts/release-verify.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- src/infrastructure/createGitPlumbing.js
Summary
v5.3.3 closes out the M17 Ledger line: CODEOWNERS, a release runbook, expanded test conventions, and `pnpm release:verify`.

Details

- Bump `package.json` and `jsr.json` to `5.3.3`
- Update `CHANGELOG.md`, `STATUS.md`, and `ROADMAP.md` so `v5.3.2` is the latest shipped release and `v5.3.3` is the in-flight line
- … `v5.3.2` until release finalization, per the release plan
- Add `.github/CODEOWNERS` with repo-wide ownership for `@git-stunts`
- Add `docs/RELEASE.md` and wire it from `CONTRIBUTING.md`
- Add `scripts/release/verify.js` plus unit coverage for summary generation and failure propagation
- Add `fast-check` envelope property tests plus shared property helpers
- Add `createGitPlumbing()` so Bun uses the stable Node-backed plumbing runner path
- Update `GitPersistenceAdapter.writeBlob()` to hash temp files under Bun instead of streaming large buffers to `git hash-object --stdin`
- Rebase the Docker test images on `ubuntu:24.04`, copying runtime binaries from the official upstream images rather than inheriting Debian-based runtime images directly
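The Bun-aware paths described above rely on runtime detection. A minimal sketch, assuming the conventional global-probe checks; the strategy selection around them is illustrative, not the project's actual `createGitPlumbing()` or `writeBlob()` code:

```javascript
// Minimal runtime-detection sketch for the Bun-aware paths described above.
// `typeof Bun !== 'undefined'` is the conventional probe; the selection logic
// around it is an illustrative assumption, not the project's actual code.
function detectRuntime() {
  if (typeof Bun !== 'undefined') return 'bun';
  if (typeof Deno !== 'undefined') return 'deno';
  return 'node';
}

function selectBlobWriteStrategy() {
  // Under Bun, hash a temp file on disk; elsewhere, stream the buffer to
  // `git hash-object --stdin` as before.
  return detectRuntime() === 'bun' ? 'temp-file' : 'stdin-stream';
}
```

Centralizing the probe in a factory keeps runtime branching in one place instead of scattering `typeof Bun` checks through the adapters.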
Validation

- `pnpm release:verify`
- `docker compose run --build --rm test-node npx vitest run test/integration`
- `pnpm release:verify` passed end to end with:
  - `npm pack --dry-run`
  - `npx jsr publish --dry-run --allow-dirty`

Local Docker Cleanup

- From `45.76GB` of images + `36.46GB` of build cache to roughly `3.37GB` of images + `809MB` of build cache after rebuild

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores
Tests