feat: support self-hosted remote-control-server #214
Conversation
165 tests covering store, auth, event-bus, services, work-dispatch, ws-handler, and middleware modules. Also includes ws-handler fix to replay all events (inbound + outbound) for bridge reconnection. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
35 route integration tests covering v1 sessions, v1 environments, v1 work, v2 code sessions, v2 worker, web auth, web sessions, web control, web environments. Plus 4 disconnect-monitor logic tests. Total: 204 tests, 364 assertions across 9 test files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
7 additional route tests covering session-ingress HTTP POST, worker events publishing, worker state updates, and no-op endpoints. Total: 211 tests, 374 assertions across 9 test files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ase tests Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough
Adds a new Remote Control Server package with routes, auth (API keys, JWTs, tokens), an in-memory store/event-bus, work dispatch and a disconnect monitor, WebSocket/SSE bridge support, a web UI and assets, Docker/CI build/publish tooling, extensive tests, and bridge-mode diagnostics/self-hosted overrides.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Web as Client (Web UI)
    participant Bridge as Bridge / Remote Client
    participant Server as RCS Server
    participant Store as In‑memory Store
    participant Bus as EventBus
    participant Transport as Transport (SSE/WS)
    Web->>Server: POST /v1/sessions (create)
    Server->>Store: storeCreateSession()
    Store-->>Server: SessionRecord
    Server->>Bus: getEventBus(sessionId)
    Bus-->>Server: EventBus instance
    Server-->>Web: SessionResponse
    Bridge->>Server: POST /v1/sessions/:id/events or WS send
    Server->>Bus: publish(event)
    Bus->>Transport: notify subscribers
    Transport-->>Web: SSE/WS delivery (replay + live)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
…f sensitive information' Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (27)
packages/remote-control-server/Dockerfile-18-19 (1)
18-19: ⚠️ Potential issue | 🟠 Major
The `web` directory requires clarification—there's a mismatch between build intent and execution.
The repository contains a `build:web` script in `packages/remote-control-server/package.json`, which signals that web assets should be built. However:

- No build configuration file exists in the web directory (no bunfig.toml, vite.config.js, etc.)
- The CI/CD pipeline does not run `build:web` before `docker build`
- The Dockerfile copies unbuilt web source files directly from the host (lines 18-19)

Either:

- Remove the unused `build:web` script if `web` is meant to be served as static source files, or
- Add the build step to the CI/CD pipeline and update the Dockerfile to copy built assets from the builder stage
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/Dockerfile` around lines 18 - 19, The Dockerfile currently copies the unbuilt web sources ("COPY packages/remote-control-server/web ./web") while the package.json contains a "build:web" script—either remove that unused "build:web" script or make the Docker build actually produce and copy built assets: update the multi-stage Dockerfile to run the "build:web" step in the builder stage (invoke the build:web script in the builder's RUN steps) and then change the COPY to pull built output from the builder (e.g., copy from the builder's built web output location instead of the source), and ensure the CI pipeline runs the build step before docker build if you prefer building outside Docker.
packages/remote-control-server/web/index.html-140-143 (1)
140-143: ⚠️ Potential issue | 🟠 Major
Add Subresource Integrity (SRI) hashes for CDN scripts.
Loading scripts from CDNs without integrity hashes exposes the application to supply-chain attacks if the CDN is compromised. Add `integrity` and `crossorigin` attributes to both scripts at lines 141-142.

🔒 Proposed fix with SRI hashes

```diff
 <!-- QR Libraries -->
-<script src="https://cdnjs.cloudflare.com/ajax/libs/qrcodejs/1.0.0/qrcode.min.js"></script>
-<script src="https://cdn.jsdelivr.net/npm/jsqr@1.4.0/dist/jsQR.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/qrcodejs/1.0.0/qrcode.min.js"
+        integrity="sha512-CNgIRecGo7nphbeZ04Sc13ka07paqdeTu0WR1IM4kNcpmBAUSHSQX0FslNhTDadL4O5SAGapGt4FodqL8My0mA=="
+        crossorigin="anonymous"></script>
+<script src="https://cdn.jsdelivr.net/npm/jsqr@1.4.0/dist/jsQR.js"
+        integrity="sha256-3PRNE1MDyVVrIgLVxpJljsWi7C3Ma+SeXLHvFwJZT8E="
+        crossorigin="anonymous"></script>
```

Important: Generate the SRI hashes from the actual CDN files to ensure they match the current versions. You can compute them using:

```shell
# For qrcode.min.js
curl https://cdnjs.cloudflare.com/ajax/libs/qrcodejs/1.0.0/qrcode.min.js | openssl dgst -sha512 -binary | base64 -A | sed 's/^/sha512-/'

# For jsQR.js
curl https://cdn.jsdelivr.net/npm/jsqr@1.4.0/dist/jsQR.js | openssl dgst -sha256 -binary | base64 -A | sed 's/^/sha256-/'
```

Or use an online SRI generator like the jsdelivr SRI hash generator.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/index.html` around lines 140 - 143, The two external script tags that load qrcodejs (https://cdnjs.cloudflare.com/ajax/libs/qrcodejs/1.0.0/qrcode.min.js) and jsQR (https://cdn.jsdelivr.net/npm/jsqr@1.4.0/dist/jsQR.js) must include Subresource Integrity and crossorigin attributes; update the <script> elements that reference qrcode.min.js and jsQR.js to add an integrity="..." attribute with the correct SRI hash (compute the hash from the exact CDN files) and add crossorigin="anonymous" to each tag so the browser can verify the files before execution and fetch them correctly.
packages/remote-control-server/src/services/disconnect-monitor.ts-5-31 (1)
5-31: ⚠️ Potential issue | 🟠 Major
Make the monitor fault-tolerant and stoppable.
A thrown error inside the interval path can break monitoring behavior, and the current API cannot be cleanly stopped on shutdown/restart.

💡 Suggested fix
```diff
 export function startDisconnectMonitor() {
   const timeoutMs = config.disconnectTimeout * 1000;
-  setInterval(() => {
-    const now = Date.now();
+  const timer = setInterval(() => {
+    try {
+      const now = Date.now();
 
-    // Check environment heartbeat timeout
-    const envs = storeListActiveEnvironments();
-    for (const env of envs) {
-      if (env.lastPollAt && now - env.lastPollAt.getTime() > timeoutMs) {
-        console.log(`[RCS] Environment ${env.id} timed out (no poll for ${Math.round((now - env.lastPollAt.getTime()) / 1000)}s)`);
-        storeUpdateEnvironment(env.id, { status: "disconnected" });
-      }
-    }
+      // Check environment heartbeat timeout
+      const envs = storeListActiveEnvironments();
+      for (const env of envs) {
+        if (env.lastPollAt && now - env.lastPollAt.getTime() > timeoutMs) {
+          console.log(`[RCS] Environment ${env.id} timed out (no poll for ${Math.round((now - env.lastPollAt.getTime()) / 1000)}s)`);
+          storeUpdateEnvironment(env.id, { status: "disconnected" });
+        }
+      }
 
-    // Check session timeout (2x disconnect timeout with no update)
-    const sessions = storeListSessions();
-    for (const session of sessions) {
-      if (session.status === "running" || session.status === "idle") {
-        const elapsed = now - session.updatedAt.getTime();
-        if (elapsed > timeoutMs * 2) {
-          console.log(`[RCS] Session ${session.id} marked inactive (no update for ${Math.round(elapsed / 1000)}s)`);
-          storeUpdateSession(session.id, { status: "inactive" });
-        }
-      }
-    }
+      // Check session timeout (2x disconnect timeout with no update)
+      const sessions = storeListSessions();
+      for (const session of sessions) {
+        if (session.status === "running" || session.status === "idle") {
+          const elapsed = now - session.updatedAt.getTime();
+          if (elapsed > timeoutMs * 2) {
+            console.log(`[RCS] Session ${session.id} marked inactive (no update for ${Math.round(elapsed / 1000)}s)`);
+            storeUpdateSession(session.id, { status: "inactive" });
+          }
+        }
+      }
+    } catch (error) {
+      console.error("[RCS] Disconnect monitor iteration failed", error);
+    }
   }, 60_000); // Check every minute
+
+  return () => clearInterval(timer);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/services/disconnect-monitor.ts` around lines 5 - 31, The disconnect monitor must be made fault-tolerant and stoppable: wrap the entire setInterval callback body in a try/catch and log errors so one thrown exception cannot kill the monitor, and change startDisconnectMonitor to store the interval ID in a module-level variable and return (or export) a stopDisconnectMonitor function that calls clearInterval on that ID (or have startDisconnectMonitor return a stopper function); reference the existing startDisconnectMonitor, storeListActiveEnvironments, storeListSessions, storeUpdateEnvironment and storeUpdateSession to locate where to add the try/catch and where to keep/clear the timer; ensure the stop function is idempotent (safe to call multiple times) and that startDisconnectMonitor does not create multiple intervals if already running.
packages/remote-control-server/src/routes/v2/worker-events.ts-11-17 (1)
11-17: ⚠️ Potential issue | 🟠 Major
Validate request payload shape before publishing events.
This path can throw on invalid bodies (e.g., `null` or primitives), turning client input errors into server 500s.

🛠️ Suggested fix
```diff
 app.post("/:id/worker/events", acceptCliHeaders, sessionIngressAuth, async (c) => {
   const sessionId = c.req.param("id");
-  const body = await c.req.json();
+  let body: unknown;
+  try {
+    body = await c.req.json();
+  } catch {
+    return c.json({ status: "error", message: "Invalid JSON body" }, 400);
+  }
 
-  const events = Array.isArray(body) ? body : [body];
+  const events = Array.isArray(body) ? body : [body];
   const published = [];
   for (const evt of events) {
-    const result = publishSessionEvent(sessionId, evt.type || "message", evt, "inbound");
+    if (!evt || typeof evt !== "object") {
+      return c.json({ status: "error", message: "Each event must be an object" }, 400);
+    }
+    const eventType = typeof (evt as { type?: unknown }).type === "string" ? (evt as { type?: string }).type! : "message";
+    const result = publishSessionEvent(sessionId, eventType, evt, "inbound");
     published.push(result);
   }
   return c.json({ status: "ok", count: published.length }, 200);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/v2/worker-events.ts` around lines 11 - 17, Request payload is not validated before being used, so null or primitive bodies cause a server error; validate the parsed body from c.req.json() to ensure it's either an object or an array of objects before building the events array, return a 400/invalid request response for bad shapes, and only call publishSessionEvent(sessionId, evt.type || "message", evt, "inbound") for entries that are non-null objects (optionally filtering out invalid items and logging/collecting errors for them) to avoid turning client input errors into 500s.
packages/remote-control-server/src/types/api.ts-139-147 (1)
139-147: ⚠️ Potential issue | 🟠 Major
The event sequence field casing is inconsistent with the new web client.
The browser SSE/history consumer dedupes on `seqNum`, while this shared API type still exposes `seq_num`. Standardize one name across the transport layer and exported types before more typed consumers depend on the wrong shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/types/api.ts` around lines 139 - 147, The shared API type SessionEventPayload uses snake_case field seq_num but the web client expects seqNum; update the SessionEventPayload interface to use seqNum (number) instead of seq_num and propagate that rename through the transport layer: update any producers (event creation/serialization), consumers (SSE/history handlers), and JSON (de)serialization code to emit/parse seqNum, and adjust tests and type imports that reference seq_num to the new seqNum symbol so the browser SSE/history consumer and other typed consumers share the same shape.
packages/remote-control-server/src/routes/web/control.ts-59-60 (1)
59-60: ⚠️ Potential issue | 🟠 Major
Don't mark the session `idle` before the interrupt is actually processed.
Publishing the interrupt is asynchronous. If the worker is still running or ignores the event, the server state is now wrong and the UI can show a stopped session that never stopped. Let the worker/status pipeline drive this transition, or introduce a distinct `interrupting` state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/web/control.ts` around lines 59 - 60, The code currently sets session status to "idle" immediately after calling publishSessionEvent(sessionId, "interrupt", ...) which can misrepresent state if the worker hasn't processed the interrupt; remove the immediate updateSessionStatus(sessionId, "idle") call and instead let the worker/status pipeline set the final state, or change the status to a transient "interrupting" via updateSessionStatus(sessionId, "interrupting") so the UI reflects an in-progress interruption until the worker emits the real outcome; locate the calls to publishSessionEvent and updateSessionStatus in control.ts and apply one of these two approaches.
packages/remote-control-server/web/api.js-20-27 (1)
20-27: ⚠️ Potential issue | 🟠 Major
Don't send the web credential in the URL.
`uuid` is the only thing protecting these web routes, and query params leak into browser history, reverse-proxy/access logs, and `Referer` headers. Move it to a header or secure cookie instead of appending it to every request URL; the SSE connection should follow the same pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/api.js` around lines 20 - 27, The api function currently appends the getUuid() value to the request URL (uuid query param) which leaks credentials; change api(method, path, body) to stop adding uuid to the URL and instead send the uuid in a request header (e.g. "X-Auth-UUID") or secure SameSite cookie, remove the sep/url construction that adds uuid, and ensure the server-side auth reads the header/cookie; also update any SSE connection code that currently attaches uuid as a query param to use the same header/cookie mechanism so the credential is never sent in URLs.
packages/remote-control-server/src/types/api.ts-24-28 (1)
24-28: ⚠️ Potential issue | 🟠 Major
`RegisterEnvironmentResponse` doesn't match the route contract.
The environment routes and their new tests consume `environment_id`, but this exported API type advertises `id`. TS callers will compile against the wrong wire shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/types/api.ts` around lines 24 - 28, The exported RegisterEnvironmentResponse interface uses the wrong property name: change the shape to match the route contract by renaming id to environment_id (keep secret and status as-is) so consumers receive the correct wire format; update the RegisterEnvironmentResponse interface declaration to include environment_id: string instead of id: string and run TypeScript tests to ensure callers and tests align with the new field name.
packages/remote-control-server/src/routes/web/control.ts-30-34 (1)
30-34: ⚠️ Potential issue | 🟠 Major
Don't let `/events` spoof event types or log raw bodies.
This handler is documented as user-message ingress, but it trusts `body.type` and logs a preview of the raw body. That lets the browser forge internal event types and leaks prompt/tool content into server logs. Force `eventType = "user"` here, and only emit redacted metadata behind an explicit debug flag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/web/control.ts` around lines 30 - 34, The handler currently trusts body.type and logs the raw request body which enables clients to spoof internal event types and leak sensitive content; change the code to always set eventType = "user" (do not read body.type), call publishSessionEvent(sessionId, "user", body, "outbound"), and replace the console.log lines that print JSON.stringify(body) with a safe, minimal metadata log (e.g., event id/type/direction/subscriber count) or only emit a redacted preview when an explicit debug flag is enabled; update references around publishSessionEvent and getEventBus(sessionId).subscriberCount() to use the sanitized logging and ensure no raw body is written to logs.
packages/remote-control-server/src/routes/web/sessions.ts-44-48 (1)
44-48: ⚠️ Potential issue | 🟠 Major
`/web/sessions/all` leaks every session to any UUID holder.
`uuidAuth` only proves a caller supplied a UUID, not that it is privileged. Returning all titles/statuses/usernames here breaks the ownership model everywhere else in the web API. Gate this route behind real operator auth, or keep it owner-scoped.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/web/sessions.ts` around lines 44 - 48, The /sessions/all route currently uses uuidAuth and returns listSessionSummaries() which leaks every session to any UUID holder; change this to either require operator-level auth (replace uuidAuth with operatorAuth or check operator role from the auth payload) or make the route owner-scoped by extracting the caller UUID from uuidAuth and returning only sessions owned by that UUID (e.g., call a filtered version of listSessionSummaries(ownerId) or filter the array returned by listSessionSummaries() to match the auth identity). Update the handler for app.get("/sessions/all", ...) and keep the route name or split into two routes (one operator-only and one owner-only) as appropriate.
packages/remote-control-server/src/__tests__/routes.test.ts-392-408 (1)
392-408: ⚠️ Potential issue | 🟠 Major
These v2 worker tests are creating the wrong kind of session.
The routes under test are `/v1/code/sessions/:id/...`, but these cases seed them with `/v1/sessions` ids. That means the suite can still pass even if ordinary `session_*` ids are accidentally accepted or the real `cse_*` flow regresses. Create the fixtures through `/v1/code/sessions` (or a helper that returns `body.session.id`) for all worker/worker-events cases.

Also applies to: 839-905
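A fixture helper along the lines this comment suggests might look like the sketch below. The `/v1/code/sessions` route and the `{ session: { id } }` response shape are taken from the comment itself, and the `request` signature is a simplified, hypothetical stand-in for Hono's `app.request`:

```typescript
// Minimal response shape the helper needs; a real test would pass app.request.
type Res = { json(): Promise<unknown> };

// Create a code session and return its id, so worker/worker-events tests
// exercise real cse_* ids instead of ordinary session_* ids.
async function createCodeSessionFixture(
  request: (path: string, init: { method: string }) => Promise<Res>,
): Promise<string> {
  const res = await request("/v1/code/sessions", { method: "POST" });
  const body = (await res.json()) as { session: { id: string } };
  return body.session.id; // expect a cse_* id, not a session_* id
}
```

Each worker test would then seed its route with `await createCodeSessionFixture(app.request)` instead of posting to `/v1/sessions`.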
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/__tests__/routes.test.ts` around lines 392 - 408, The tests in packages/remote-control-server/src/__tests__/routes.test.ts are creating sessions via POST /v1/sessions but then exercising /v1/code/sessions/:id routes (e.g., in the "POST /v1/code/sessions/:id/worker/register" test), so replace the session creation call to POST /v1/code/sessions (or use the existing helper that returns body.session.id) and extract the correct id (e.g., const { session: { id } } = await createRes.json() or use helper) so the tests seed real code-session (cse_*) IDs; apply the same fix to the other worker/worker-events tests flagged in the file.
packages/remote-control-server/src/index.ts-8-10 (1)
8-10: 🛠️ Refactor suggestion | 🟠 Major
Replace `node:path` and `node:url` with Bun's `import.meta.dir` and `new URL`.
Lines 8-10 and 37-38 use Node.js path utilities in a Bun-specific server. Use `import.meta.dir` for the current directory and `new URL(..., import.meta.url).pathname` to resolve relative paths instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/index.ts` around lines 8 - 10, Replace Node-specific path/url utilities: remove the imports of dirname, resolve, and fileURLToPath and switch usages to Bun-compatible APIs; where the code currently computes the module dir via dirname(fileURLToPath(import.meta.url)) use import.meta.dir, and where it resolves a relative path using resolve(...) or new URL(...).pathname use new URL(relativePath, import.meta.url).pathname (or path concatenation against import.meta.dir) instead. Update any occurrences in this file that reference dirname, resolve, or fileURLToPath so they use import.meta.dir and new URL(..., import.meta.url).pathname, keeping the same resolved path strings used by functions that load files or assets.
packages/remote-control-server/web/sse.js-11-23 (1)
11-23: ⚠️ Potential issue | 🟠 Major
`fromSeqNum` never reaches the server on initial connection.
`connectSSE()` accepts a replay offset, but the request URL only includes `uuid`. The EventSource API does not allow setting an initial `Last-Event-ID` header on first connection—the browser only sends `Last-Event-ID` on reconnections. As a result, the first connection after loading history replays all events from the default sequence, forcing an unnecessary backlog replay that is then filtered client-side. Include `fromSeqNum` in the request URL so the server can start streaming from the correct offset and skip sending unnecessary events.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/sse.js` around lines 11 - 23, connectSSE currently never sends the initial fromSeqNum to the server so the first EventSource connection always receives the full backlog; modify the URL construction inside connectSSE (the function and the local variable url where uuid is appended) to also include fromSeqNum as a query parameter (properly encoded, e.g. &fromSeqNum=...) so the server can start streaming from that offset on the initial connection.
packages/remote-control-server/src/services/session.ts-53-64 (1)
53-64: ⚠️ Potential issue | 🟠 Major
Make session mutations surface "not found" to callers.
`updateSessionTitle()`, `updateSessionStatus()`, and `archiveSession()` all return `void` and never re-read the row, so callers have no reliable way to distinguish a successful mutation from a missing session. That ambiguity is already leaking into the HTTP layer as inconsistent update/archive behavior.

🤖 Prompt for AI Agents
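One way to surface "not found" is to return the updated record or `null`, sketched here against an assumed in-memory `Map` store; the `SessionRecord` shape is illustrative, not the real store module:

```typescript
// Illustrative record shape and store; the real module wraps storeUpdateSession.
interface SessionRecord { id: string; title: string; status: string }
const store = new Map<string, SessionRecord>();

// Return the updated record, or null when the session does not exist,
// so HTTP handlers can map null to a 404 instead of a silent success.
function updateSessionTitle(id: string, title: string): SessionRecord | null {
  const existing = store.get(id);
  if (!existing) return null;
  const updated = { ...existing, title };
  store.set(id, updated);
  return updated;
}
```

`updateSessionStatus()` and `archiveSession()` would follow the same pattern, with route handlers branching on the `null` case.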
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/services/session.ts` around lines 53 - 64, The three functions updateSessionTitle, updateSessionStatus, and archiveSession currently fire-and-forget storeUpdateSession and return void, so callers cannot tell if the session existed; change them to surface "not found" by returning the updated session or a boolean/nullable result from storeUpdateSession and propagating that to callers: updateSessionTitle(sessionId, title), updateSessionStatus(sessionId, status), and archiveSession(sessionId) should await or return the result of storeUpdateSession (or perform a read-after-update via getSession/getSessionById) and return null/false or the updated session when the row is missing so the HTTP layer can distinguish success vs not-found. Ensure you update any callers to handle the new return type.
packages/remote-control-server/src/transport/event-bus.ts-15-16 (1)
15-16: ⚠️ Potential issue | 🟠 Major
Bound replay history before it becomes an in-memory leak.
Every publish appends to `this.events`, and nothing prunes that array until the session is removed. A busy long-lived self-hosted session will grow memory without limit, and reconnect cost grows with it because `getEventsSince()` always scans the retained history.

Also applies to: 35-55
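A minimal sketch of the bounded-history idea, assuming a `seqNum`-ordered event shape; the class and field names here are illustrative, not the package's real `EventBus` API:

```typescript
// Illustrative event shape; the real bus carries richer payloads.
interface SessionEvent {
  seqNum: number;
  payload: unknown;
}

class BoundedEventHistory {
  private events: SessionEvent[] = [];

  constructor(private readonly maxEvents: number) {}

  push(event: SessionEvent): void {
    this.events.push(event);
    // Drop the oldest entries once the cap is exceeded.
    if (this.events.length > this.maxEvents) {
      this.events.splice(0, this.events.length - this.maxEvents);
    }
  }

  // Replay only events newer than fromSeqNum; scan cost is bounded by maxEvents.
  getEventsSince(fromSeqNum: number): SessionEvent[] {
    return this.events.filter((e) => e.seqNum > fromSeqNum);
  }
}
```

A reconnecting client whose `fromSeqNum` has already been pruned would need to refetch full history, so the cap should be chosen larger than the expected reconnect window.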
packages/remote-control-server/src/routes/v1/sessions.ts-20-27 (1)
20-27: ⚠️ Potential issue | 🟠 Major
Don't return success after work-dispatch bootstrap fails.
By the time this block runs, the session is already persisted. If `createWorkItem()` fails because the environment is missing or deregistered, the handler still returns success with a session that can never be picked up by a worker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/v1/sessions.ts` around lines 20 - 27, The handler currently swallows createWorkItem() errors and still returns success, leaving a persisted session that workers can never pick up; change the behavior so bootstrap failures abort the request: in the block referencing body.environment_id, stop catching-and-logging only — either let createWorkItem throw so the route returns an error, or perform an explicit rollback (call a session deletion function such as deleteSession(session.id) or markSessionFailed(session.id)) and then return/throw an error response. Update the code around createWorkItem, session, and body.environment_id to remove the silent console.error and ensure the route signals failure (or cleans up) when createWorkItem fails.
packages/remote-control-server/src/routes/v1/session-ingress.ts-18-21 (1)
18-21: ⚠️ Potential issue | 🟠 Major
Keep URL-token auth out of the POST ingress path.
`authenticateRequest()` is reused by both handlers, so `POST /session/:sessionId/events` now accepts API keys and worker JWTs via `?token=`. That is a credential-leak footgun for a normal HTTP endpoint. If the browser WS path needs a URL-token exception, scope it there instead of giving the POST route the same behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/v1/session-ingress.ts` around lines 18 - 21, The authenticateRequest function currently accepts tokens from the URL query (?token=) and is reused by both HTTP POST and WebSocket ingress handlers, which lets POST /session/:sessionId/events accept credentials via query string; change authenticateRequest(c, label, expectedSessionId?) to stop reading queryToken (remove queryToken = c.req.query("token") and token fallback from query) so it only uses the Authorization header, and create a separate helper (e.g., authenticateWebSocketRequest or add a boolean flag like allowQueryToken) for the WS ingress that explicitly reads c.req.query("token") and calls the shared validation logic; update callers to use the header-only authenticateRequest for the POST route and the WS-specific variant for the browser WS path.
packages/remote-control-server/src/routes/v1/sessions.ts-69-82 (1)
69-82: ⚠️ Potential issue | 🟠 Major
Reject events for unknown sessions before publishing.
`publishSessionEvent()` creates an `EventBus` on demand, so this route currently accepts events for arbitrary IDs and leaves orphan in-memory buses behind. Mirror the not-found guard used by `GET /:id` before publishing.

🔧 Suggested guard

```diff
 app.post("/:id/events", acceptCliHeaders, apiKeyAuth, async (c) => {
   const sessionId = c.req.param("id");
+  if (!getSession(sessionId)) {
+    return c.json({ error: { type: "not_found", message: "Session not found" } }, 404);
+  }
   const body = await c.req.json();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/v1/sessions.ts` around lines 69 - 82, The POST /:id/events handler currently calls publishSessionEvent for any id, creating orphan EventBuses; before publishing, check session existence using the same not-found guard used by the GET /:id route (i.e., look up the session by sessionId with the same helper/store used by GET /:id), and if the session does not exist return a 404 response immediately; only iterate events and call publishSessionEvent(sessionId, ...) when the session lookup succeeds.
packages/remote-control-server/src/auth/middleware.ts-7-10 (1)
7-10: ⚠️ Potential issue | 🟠 Major
Keep query-string tokens out of the shared HTTP auth helper.
`extractBearerToken()` is reused by normal API routes, so it currently allows API keys and bearer tokens to travel in `?token=`, where server access logs, proxy logs, and browser history will capture them. Restrict URL-token auth to the few transports that cannot send headers, and make the default path header-only.

🔒 Suggested tightening

```diff
-function extractBearerToken(c: Context): string | undefined {
+function extractBearerToken(c: Context, allowQueryToken = false): string | undefined {
   const authHeader = c.req.header("Authorization");
-  const queryToken = c.req.query("token");
-  return authHeader?.replace("Bearer ", "") || queryToken;
+  if (authHeader?.startsWith("Bearer ")) {
+    return authHeader.slice("Bearer ".length);
+  }
+  return allowQueryToken ? c.req.query("token") : undefined;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/auth/middleware.ts` around lines 7 - 10, The helper extractBearerToken currently accepts tokens from both the Authorization header and the URL query string; restrict it to header-only by removing the queryToken fallback so extractBearerToken(c: Context) returns only the Authorization Bearer value (authHeader?.replace("Bearer ", "")). For transports that must accept ?token= (special cases), add a new function (e.g., extractBearerTokenFromQuery or extractTokenForInsecureTransport) and call that only from the few transport-specific handlers rather than from shared API routes; update callers of extractBearerToken if they relied on query tokens to use the new query-only helper where appropriate.
packages/remote-control-server/src/routes/v1/session-ingress.ts-57-58 (1)
57-58: ⚠️ Potential issue | 🟠 Major
Validate the ingress body before reading `events`.
`c.req.json()` plus the immediate `body.events` access will throw on malformed JSON or a literal `null`, which turns client input errors into 500s. This also misses the `{ events: <single object> }` shape and ingests the wrapper object instead.

🔧 Suggested normalization

```diff
-  const body = await c.req.json();
-  const events = Array.isArray(body.events) ? body.events : [body];
+  let body: unknown;
+  try {
+    body = await c.req.json();
+  } catch {
+    return c.json({ error: { type: "bad_request", message: "Invalid JSON body" } }, 400);
+  }
+
+  if (!body || typeof body !== "object") {
+    return c.json({ error: { type: "bad_request", message: "Body must be an object" } }, 400);
+  }
+
+  const requestBody = body as Record<string, unknown>;
+  const events = Array.isArray(requestBody.events)
+    ? requestBody.events
+    : requestBody.events !== undefined
+      ? [requestBody.events]
+      : [requestBody];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/routes/v1/session-ingress.ts` around lines 57 - 58, Wrap the c.req.json() call in a try/catch and validate the parsed body before accessing body.events: if parsing fails or the parsed value is null/not an object, return a 400 error; otherwise normalize events so that if body is an array use it as events, if body.events exists and is an array use that, if body.events is a single object wrap it in an array, and if body itself is a single event object wrap it in an array—replace the existing lines that set body and events with this validation/normalization around c.req.json(), checking for null and using Array.isArray when inspecting body or body.events.
packages/remote-control-server/src/transport/sse-writer.ts-53-84 (1)
53-84: ⚠️ Potential issue | 🟠 Major

Subscribe before replaying, or resumed SSE clients can miss events.

`getEventsSince(fromSeqNum)` runs before the live subscription is installed, so any event published in that gap is neither replayed nor streamed. That breaks resume semantics and can leave the web UI permanently missing a sequence number until the next reconnect. Please make replay + live subscription atomic, or subscribe first and buffer new events until the replay finishes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/transport/sse-writer.ts` around lines 53 - 84, The replay currently calls getEventsSince(fromSeqNum) before installing the live subscriber so events published in the gap are lost; change the logic in the SSE writer to subscribe first via bus.subscribe((event) => ...) (referencing the existing bus.subscribe, unsub, sessionId, controller.enqueue, encoder callbacks), buffer incoming events received while replay is in progress, then call getEventsSince(fromSeqNum) and flush the buffered events after replaying the historical events (ensuring duplicate seqNums are deduplicated or ignored). Ensure the subscribe callback uses the same serialized format as the replay, and only calls controller.enqueue for live events once the replay has been fully flushed; call unsub() in the catch/close paths as before.

packages/remote-control-server/web/app.js-160-198 (1)
160-198: ⚠️ Potential issue | 🟠 Major

Ignore stale `renderSessionDetail()` completions after navigation.

This function does two async fetches and then unconditionally rewrites the DOM and calls `connectSSE()`. If the user switches sessions while either request is in flight, the older render can win the race, overwrite the newer view, and attach the stream to the wrong session. Please guard the post-`await` work with a render token or a `currentSessionId === id` check before mutating the UI or connecting SSE.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/app.js` around lines 160 - 198, renderSessionDetail can race with navigation because it mutates the DOM and calls connectSSE after awaiting network calls; guard those post-await actions by verifying the render token or currentSessionId matches the id passed in before any DOM updates or calling connectSSE. Specifically, after the await apiFetchSession(id) and again after awaiting apiFetchSessionHistory(id), check that currentSessionId === id (or a per-call renderToken created at function start) and abort early if it no longer matches; ensure you do this before updating elements (session-title, session-id, session-env, session-time, session-status, event-stream, permission-area) and before calling connectSSE so an older response cannot overwrite a newer session or attach SSE to the wrong id.

packages/remote-control-server/src/transport/ws-handler.ts-133-137 (1)
133-137: ⚠️ Potential issue | 🟠 Major

Don't log raw bridge payload fragments.
Both debug statements include user/assistant/tool payload content. Remote-control sessions can carry prompts, source code, and secrets, so these snippets will leak sensitive data into server logs. Please log IDs/types/seq numbers only, or gate full payload logging behind an explicit redaction-free debug mode.
Also applies to: 218-218
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/transport/ws-handler.ts` around lines 133 - 137, The current console.log in ws-handler.ts prints raw bridge payload fragments (sdkMsg) and the catch logs print the error which may include payloads; update the logging in the outbound send path (the block that logs `[RC-DEBUG] [WS] -> bridge (outbound): type=${event.type} len=${sdkMsg.length} msg=${sdkMsg.slice(0, 300)}` and the catch that logs the error) to never include raw message content by default—log only non-sensitive metadata such as event.type, sdkMsg.length, connection/session IDs and any sequence numbers; if you need full payloads, gate them behind an explicit redaction-free debug flag (e.g., an env var or config check) and ensure the catch path similarly avoids printing full payloads or sensitive fields.

packages/remote-control-server/src/services/work-dispatch.ts-81-90 (1)
81-90: ⚠️ Potential issue | 🟠 Major

`heartbeatWork()` currently succeeds for missing work IDs.

If `storeUpdateWorkItem()` fails, the function still returns `lease_extended: true` with a fabricated fallback state. That makes expired or mistyped work IDs look healthy and can keep zombie workers running against nonexistent leases.

🩹 Suggested fix

```diff
 export function heartbeatWork(workId: string): { lease_extended: boolean; state: string; last_heartbeat: string; ttl_seconds: number } {
-  storeUpdateWorkItem(workId, {} as any); // just bump updatedAt
-  const item = storeGetWorkItem(workId);
+  const updated = storeUpdateWorkItem(workId, {});
+  const item = storeGetWorkItem(workId);
+  if (!updated || !item) {
+    throw new Error(`Work item ${workId} not found`);
+  }
   const now = new Date();
   return {
     lease_extended: true,
-    state: item?.state ?? "acked",
+    state: item.state,
     last_heartbeat: now.toISOString(),
     ttl_seconds: config.heartbeatInterval * 2,
   };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/services/work-dispatch.ts` around lines 81 - 90, The heartbeatWork function should fail when the work ID is missing or the store update fails instead of returning lease_extended: true; update heartbeatWork to call storeUpdateWorkItem(workId) inside a try/catch (or check its return value) and if it throws or the subsequent storeGetWorkItem(workId) returns null/undefined return lease_extended: false with a clear failure state (e.g., "not_found" or "missing") and ttl_seconds: 0; otherwise proceed to extend the lease as currently implemented, using the actual item.state from storeGetWorkItem.

packages/remote-control-server/src/store.ts-227-265 (1)
227-265: ⚠️ Potential issue | 🟠 Major

Work-item polling gets slower the longer the server runs.

`storeGetPendingWorkItem()` linearly scans every work item, and this module never prunes terminal entries. On a long-lived self-hosted server, environment polling becomes O(total historical work) and memory grows without bound. Keep a per-environment pending index, or delete completed/cancelled items once they leave the queue.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/store.ts` around lines 227 - 265, storeGetPendingWorkItem currently does a full scan of workItems and there is no pruning, causing O(total historical work) reads and unbounded memory growth; fix by maintaining a per-environment pending index and/or removing terminal items: update storeCreateWorkItem to add new id to a Map<string, Set<string>> keyed by environmentId, change storeGetPendingWorkItem to consult that set to quickly return a pending WorkItemRecord (look up id in workItems), and modify storeUpdateWorkItem to remove ids from the per-environment set (or delete from workItems entirely) when state becomes terminal (e.g., "completed"/"cancelled"), ensuring the index is kept in sync and completed entries are pruned.

packages/remote-control-server/src/store.ts-89-123 (1)
89-123: 🛠️ Refactor suggestion | 🟠 Major

Don't leak live environment/session/work-item records from the store.
These APIs return the same objects held in the Maps. Any caller can mutate `status`, `secret`, timestamps, or even IDs without going through `storeUpdate*`, which bypasses invariants and `updatedAt` bookkeeping. Return cloned or readonly records at the store boundary instead.

Also applies to: 133-139, 143-170, 180-189, 214-223, 227-248
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/store.ts` around lines 89 - 123, The store is returning live objects from internal Maps (e.g., environments) which allows external mutation; change storeCreateEnvironment to keep the stored EnvironmentRecord in the environments Map but return a shallow-clone or frozen copy (e.g., return {...record} or Object.freeze({...record}) ) instead of the stored instance, and update storeGetEnvironment (and all similar getters: storeGetSession, storeGetWorkItem, plus create functions noted) to likewise return clones/readonly copies; ensure only storeUpdate* functions mutate the stored objects and still update updatedAt so callers cannot bypass invariants by mutating returned objects.

packages/remote-control-server/src/store.ts-192-223 (1)
192-223: ⚠️ Potential issue | 🟠 Major

Keep secondary indexes in sync when a session disappears.

`storeDeleteSession()` only removes the primary row, and `storeBindSession()` will happily create owners for an unknown `sessionId`. That leaves stale auth state where `storeIsSessionOwner()` can stay true after deletion, and orphaned work items can still point at the dead session.

🧩 Proposed fix
```diff
 export function storeDeleteSession(id: string): boolean {
-  return sessions.delete(id);
+  const existed = sessions.delete(id);
+  sessionOwners.delete(id);
+  for (const [workItemId, workItem] of workItems) {
+    if (workItem.sessionId === id) {
+      workItems.delete(workItemId);
+    }
+  }
+  return existed;
 }
@@
-export function storeBindSession(sessionId: string, uuid: string): void {
+export function storeBindSession(sessionId: string, uuid: string): boolean {
+  if (!sessions.has(sessionId)) return false;
   let owners = sessionOwners.get(sessionId);
   if (!owners) {
     owners = new Set();
     sessionOwners.set(sessionId, owners);
   }
   owners.add(uuid);
+  return true;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/store.ts` around lines 192 - 223, storeDeleteSession currently only removes the primary sessions row, leaving the sessionOwners secondary index and any work-item references stale; change storeDeleteSession(id) to also remove the session's entry from sessionOwners (sessionOwners.delete(id)) and to scan/update any work-item structures that reference the sessionId (e.g., remove or null out session pointers in the work items map) so no orphaned pointers remain. Also harden storeBindSession(sessionId, uuid) so it first checks sessions.has(sessionId) and no-ops (or returns false) if the session does not exist, preventing creation of owners for unknown sessions; keep references to the symbols sessionOwners, sessions, storeDeleteSession, storeBindSession, storeIsSessionOwner, and storeListSessionsByOwnerUuid to locate the affected code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 912268df-21ef-4f49-add5-9d99e7e89e1b
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (77)
- .dockerignore
- .github/workflows/release-rcs.yml
- .gitignore
- package.json
- packages/remote-control-server/.gitignore
- packages/remote-control-server/Dockerfile
- packages/remote-control-server/TODO.md
- packages/remote-control-server/package.json
- packages/remote-control-server/src/__tests__/auth.test.ts
- packages/remote-control-server/src/__tests__/disconnect-monitor.test.ts
- packages/remote-control-server/src/__tests__/event-bus.test.ts
- packages/remote-control-server/src/__tests__/middleware.test.ts
- packages/remote-control-server/src/__tests__/routes.test.ts
- packages/remote-control-server/src/__tests__/services.test.ts
- packages/remote-control-server/src/__tests__/sse-writer.test.ts
- packages/remote-control-server/src/__tests__/store.test.ts
- packages/remote-control-server/src/__tests__/work-dispatch.test.ts
- packages/remote-control-server/src/__tests__/ws-handler.test.ts
- packages/remote-control-server/src/auth/api-key.ts
- packages/remote-control-server/src/auth/jwt.ts
- packages/remote-control-server/src/auth/middleware.ts
- packages/remote-control-server/src/auth/token.ts
- packages/remote-control-server/src/config.ts
- packages/remote-control-server/src/index.ts
- packages/remote-control-server/src/routes/v1/environments.ts
- packages/remote-control-server/src/routes/v1/environments.work.ts
- packages/remote-control-server/src/routes/v1/session-ingress.ts
- packages/remote-control-server/src/routes/v1/sessions.ts
- packages/remote-control-server/src/routes/v2/code-sessions.ts
- packages/remote-control-server/src/routes/v2/worker-events-stream.ts
- packages/remote-control-server/src/routes/v2/worker-events.ts
- packages/remote-control-server/src/routes/v2/worker.ts
- packages/remote-control-server/src/routes/web/auth.ts
- packages/remote-control-server/src/routes/web/control.ts
- packages/remote-control-server/src/routes/web/environments.ts
- packages/remote-control-server/src/routes/web/sessions.ts
- packages/remote-control-server/src/services/disconnect-monitor.ts
- packages/remote-control-server/src/services/environment.ts
- packages/remote-control-server/src/services/session.ts
- packages/remote-control-server/src/services/transport.ts
- packages/remote-control-server/src/services/work-dispatch.ts
- packages/remote-control-server/src/store.ts
- packages/remote-control-server/src/transport/event-bus.ts
- packages/remote-control-server/src/transport/sse-writer.ts
- packages/remote-control-server/src/transport/ws-handler.ts
- packages/remote-control-server/src/types/api.ts
- packages/remote-control-server/src/types/messages.ts
- packages/remote-control-server/test-write.txt
- packages/remote-control-server/tsconfig.json
- packages/remote-control-server/web/api.js
- packages/remote-control-server/web/app.js
- packages/remote-control-server/web/base.css
- packages/remote-control-server/web/components.css
- packages/remote-control-server/web/index.html
- packages/remote-control-server/web/messages.css
- packages/remote-control-server/web/pages.css
- packages/remote-control-server/web/render.js
- packages/remote-control-server/web/sse.js
- packages/remote-control-server/web/style.css
- packages/remote-control-server/web/utils.js
- scripts/rcs.ts
- src/bridge/bridgeApi.ts
- src/bridge/bridgeConfig.ts
- src/bridge/bridgeEnabled.ts
- src/bridge/bridgeMain.ts
- src/bridge/bridgeMessaging.ts
- src/bridge/bridgeStatusUtil.ts
- src/bridge/createSession.ts
- src/bridge/initReplBridge.ts
- src/bridge/rcDebugLog.ts
- src/bridge/replBridge.ts
- src/cli/transports/HybridTransport.ts
- src/cli/transports/SSETransport.ts
- src/cli/transports/WebSocketTransport.ts
- src/components/App.tsx
- src/constants/product.ts
- src/entrypoints/cli.tsx
```ts
export function registerEnvironment(req: RegisterEnvironmentRequest & { metadata?: { worker_type?: string }; username?: string }) {
  const secret = config.apiKeys[0] || "";
  const workerType = req.worker_type || req.metadata?.worker_type;
  const record = storeCreateEnvironment({
    secret,
    machineName: req.machine_name,
    directory: req.directory,
    branch: req.branch,
    gitRepoUrl: req.git_repo_url,
    maxSessions: req.max_sessions,
    workerType,
    bridgeId: req.bridge_id,
    username: req.username,
  });

  return { environment_id: record.id, environment_secret: record.secret, status: record.status as "active" };
```
Do not hand out the global API key as every environment_secret.
Every environment registered here receives config.apiKeys[0], so compromising one worker compromises every environment and revocation becomes all-or-nothing. The || "" fallback also allows registration to succeed with an empty credential. environment_secret needs to be a fresh per-environment secret, not a copy of the server-wide API key.
🔐 Suggested fix

```diff
 export function registerEnvironment(req: RegisterEnvironmentRequest & { metadata?: { worker_type?: string }; username?: string }) {
-  const secret = config.apiKeys[0] || "";
+  const secret = crypto.randomUUID();
   const workerType = req.worker_type || req.metadata?.worker_type;
   const record = storeCreateEnvironment({
     secret,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/services/environment.ts` around lines 24 -
39, registerEnvironment is using the global config.apiKeys[0] (with a ""
fallback) as the per-environment secret, which leaks the global API key and
allows empty credentials; instead generate a secure unique secret for each
environment and pass that into storeCreateEnvironment (replace usage of
config.apiKeys[0] in registerEnvironment), remove the empty-string fallback, and
use a cryptographically secure generator (e.g. crypto.randomBytes or equivalent)
to create the secret before calling storeCreateEnvironment so each record.id has
its own environment_secret rather than sharing the server-wide key.
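As a concrete sketch of that direction, a fresh 256-bit secret per environment can come from the standard crypto module (the helper name is hypothetical; the suggested fix above uses crypto.randomUUID(), which also works):

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical helper: each environment gets its own 256-bit secret,
// hex-encoded, instead of a copy of the server-wide API key.
function generateEnvironmentSecret(): string {
  return randomBytes(32).toString("hex");
}
```

Per-environment secrets also make revocation granular: invalidating one environment's credential no longer touches any other worker.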
```ts
while (Date.now() < deadline) {
  const item = storeGetPendingWorkItem(environmentId);

  if (item) {
    storeUpdateWorkItem(item.id, { state: "dispatched" });

    return {
      id: item.id,
      type: "work",
      environment_id: environmentId,
      state: "dispatched",
      data: {
        type: "session",
        id: item.sessionId,
      },
      secret: item.secret,
      created_at: item.createdAt.toISOString(),
    };
  }
```
Claim pending work atomically.
The storeGetPendingWorkItem() → storeUpdateWorkItem(..., { state: "dispatched" }) sequence is not atomic. Two workers polling the same environment can both read the same pending item before either update lands, so the same session gets dispatched twice.
💡 Suggested direction

```diff
-  const item = storeGetPendingWorkItem(environmentId);
+  const item = storeClaimPendingWorkItem(environmentId);
   if (item) {
-    storeUpdateWorkItem(item.id, { state: "dispatched" });
-
     return {
       id: item.id,
       type: "work",
       environment_id: environmentId,
-      state: "dispatched",
+      state: item.state,
```

This needs a store-level compare-and-swap / "claim" helper so the read and state transition happen together.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/services/work-dispatch.ts` around lines 47
- 65, The current read-then-update in work-dispatch (storeGetPendingWorkItem →
storeUpdateWorkItem(..., { state: "dispatched" })) is racy and can dispatch the
same item twice; replace it with an atomic claim/compare-and-swap at the store
layer (e.g., implement storeClaimPendingWorkItem or a
storeUpdateWorkItemWithExpectedState) so the store performs "find pending item
for environment and set state = 'dispatched' only if current state = 'pending'"
in a single DB transaction/UPDATE ... WHERE ... RETURNING step (or using row
locking). Update the dispatch loop to call that new atomic claim function and
use its returned item (if any) instead of the separate get+update sequence.
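For a single-process, in-memory store like this one, the claim helper can be a plain synchronous function: atomicity follows from there being no `await` between the find and the state transition. A minimal sketch (the function name follows the suggestion; the real record shape will differ):

```typescript
interface WorkItem {
  id: string;
  environmentId: string;
  state: "pending" | "dispatched" | "completed";
}

const workItems = new Map<string, WorkItem>();

// Find-and-transition in one synchronous step: two pollers calling this
// back-to-back can never both receive the same item, because the first
// call flips state to "dispatched" before the second call scans.
function storeClaimPendingWorkItem(environmentId: string): WorkItem | undefined {
  for (const item of workItems.values()) {
    if (item.environmentId === environmentId && item.state === "pending") {
      item.state = "dispatched";
      return item;
    }
  }
  return undefined;
}
```

A SQL-backed store would instead need `UPDATE ... WHERE state = 'pending' ... RETURNING` or row locking, as the review prompt notes.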
```ts
// If there's an existing connection for this session, clean it up first
const existing = cleanupBySession.get(sessionId);
if (existing) {
  console.log(`[WS] Replacing existing connection for session=${sessionId}`);
  existing.unsub();
  clearInterval(existing.keepalive);
  activeConnections.delete(existing.ws);
}
```
Superseded sockets can still mutate the session and can tear down the replacement connection.
When a second socket takes over a session, the old socket is unsubscribed but left open. Because handleWebSocketMessage() and handleWebSocketClose() never verify that the caller is still the active ws for that sessionId, the stale connection can keep publishing inbound events, and its later close will unsubscribe the new live connection.
🔁 Suggested fix

```diff
   const existing = cleanupBySession.get(sessionId);
   if (existing) {
     console.log(`[WS] Replacing existing connection for session=${sessionId}`);
     existing.unsub();
     clearInterval(existing.keepalive);
     activeConnections.delete(existing.ws);
+    try {
+      if (existing.ws.readyState === 1) {
+        existing.ws.close(1008, "replaced");
+      }
+    } catch {
+      // ignore close errors while replacing
+    }
   }
@@
 export function handleWebSocketMessage(ws: WSContext, sessionId: string, data: string) {
+  const active = cleanupBySession.get(sessionId);
+  if (!active || active.ws !== ws) return;
   const lines = data.split("\n").filter((l) => l.trim());
@@
 export function handleWebSocketClose(ws: WSContext, sessionId: string, code?: number, reason?: string) {
   activeConnections.delete(ws);
   const entry = cleanupBySession.get(sessionId);
+  if (!entry || entry.ws !== ws) return;
   const duration = entry ? Math.round((Date.now() - entry.openTime) / 1000) : -1;
```

Also applies to: 158-166, 170-182
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/transport/ws-handler.ts` around lines 102
- 109, Stale sockets can still mutate session state after being superseded
because handlers don't verify they're the current socket; when replacing a
connection in cleanupBySession for a sessionId, fully terminate the old socket
(call existing.ws.close() or similar) and remove its mapping and
activeConnections, and update cleanupBySession to point to the new {ws, unsub,
keepalive}. Then, in handleWebSocketMessage and handleWebSocketClose, first
check that cleanupBySession.get(sessionId)?.ws === ws (the caller) before
performing any session mutations or unsubscribes; apply the same guard/update
logic for the other affected blocks (near lines 158–166 and 170–182) to prevent
a stale connection from affecting the new one.
```js
el.innerHTML = `
  <div class="perm-title">Permission Request</div>
  ${description ? `<div class="perm-desc">${esc(description)}</div>` : ""}
  <div class="perm-tool-name"><strong>${esc(toolName)}</strong></div>
  ${toolName !== "AskUserQuestion" ? `<div class="perm-tool">${esc(truncate(inputStr, 500))}</div>` : ""}
  <div class="perm-actions">
    <button class="btn-approve" onclick="window._approvePerm('${esc(requestId)}', this)">Approve</button>
    <button class="btn-reject" onclick="window._rejectPerm('${esc(requestId)}', this)">Reject</button>
  </div>`;
```
Stop interpolating request IDs into inline onclick handlers.
These blocks embed requestId into JavaScript string literals inside HTML attributes. esc() is an HTML escaper, not a JS-string escaper, so a crafted request ID can break out after entity decoding and execute script in the remote-control UI. Please move the IDs into data-* attributes and bind the buttons with addEventListener() instead.
Also applies to: 381-383, 417-421, 460-461, 484-485
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/web/render.js` around lines 336 - 344, The
current template injects requestId into inline onclick handlers (el.innerHTML
using window._approvePerm/window._rejectPerm), which is unsafe; instead, put the
requestId into data-request-id attributes on the approve/reject buttons (e.g.,
<button class="btn-approve" data-request-id="${esc(requestId)}">) and remove
inline onclicks, then immediately after setting el.innerHTML querySelector the
buttons (e.g., el.querySelector('.btn-approve') / '.btn-reject') and attach
event listeners with addEventListener that call
window._approvePerm(btn.dataset.requestId, btn) and window._rejectPerm(...).
Apply the same change pattern to the other occurrences you flagged (the blocks
at ~381-383, 417-421, 460-461, 484-485) so no requestId is interpolated into
inline JS.
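To see why an HTML escaper is not enough here, consider this sketch (the `esc()` below mirrors a typical entity escaper and is an assumption about the UI's actual implementation): the browser entity-decodes the attribute value before handing it to the JS parser, so an escaped quote becomes a real quote again and terminates the string literal.

```typescript
// A typical HTML entity escaper (assumed to resemble the UI's esc()).
function esc(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Simulate the browser's attribute entity-decoding step.
function entityDecode(s: string): string {
  return s
    .replace(/&#39;/g, "'")
    .replace(/&quot;/g, '"')
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&amp;/g, "&");
}

const requestId = "x');alert(1);//";
const onclickAttr = `window._approvePerm('${esc(requestId)}', this)`;
// What the JS engine actually parses after entity decoding: the &#39;
// round-trips back to a quote, closes the literal, and alert(1) runs.
const parsedSource = entityDecode(onclickAttr);
```

With `data-request-id` attributes plus `addEventListener`, the ID only ever travels as a DOM string value and never re-enters a JS parser, so no second layer of escaping is needed.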
```ts
rcLog(
  `SSE handleConnectionError: state=${this.state}` +
    ` lastSeqNum=${this.getLastSequenceNum()}` +
    ` reconnectAttempts=${this.reconnectAttempts}` +
    ` msSinceLastActivity=${this.lastActivityTime > 0 ? Date.now() - this.lastActivityTime : -1}`,
)
```
lastActivityTime property does not exist on SSETransport.
The rcLog statement references this.lastActivityTime, but this property is never declared in SSETransport. Unlike WebSocketTransport (which has private lastActivityTime = 0 at line 97), SSETransport only tracks lastSequenceNum.
This will produce undefined, causing the calculation Date.now() - this.lastActivityTime to result in NaN.
🐛 Proposed fix: Add the missing property or use an alternative
Option 1: Add the missing property and update it in resetLivenessTimer:
```diff
   private livenessTimer: NodeJS.Timeout | null = null
+  private lastActivityTime = 0
   // ...
   private resetLivenessTimer(): void {
     this.clearLivenessTimer()
+    this.lastActivityTime = Date.now()
     this.livenessTimer = setTimeout(this.onLivenessTimeout, LIVENESS_TIMEOUT_MS)
   }
```

Option 2: Remove the undefined reference from the log:
```diff
 rcLog(
   `SSE handleConnectionError: state=${this.state}` +
     ` lastSeqNum=${this.getLastSequenceNum()}` +
-    ` reconnectAttempts=${this.reconnectAttempts}` +
-    ` msSinceLastActivity=${this.lastActivityTime > 0 ? Date.now() - this.lastActivityTime : -1}`,
+    ` reconnectAttempts=${this.reconnectAttempts}`,
 )
```
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```ts
rcLog(
  `SSE handleConnectionError: state=${this.state}` +
    ` lastSeqNum=${this.getLastSequenceNum()}` +
    ` reconnectAttempts=${this.reconnectAttempts}` +
    ` msSinceLastActivity=${this.lastActivityTime > 0 ? Date.now() - this.lastActivityTime : -1}`,
)
```

After:

```ts
rcLog(
  `SSE handleConnectionError: state=${this.state}` +
    ` lastSeqNum=${this.getLastSequenceNum()}` +
    ` reconnectAttempts=${this.reconnectAttempts}`,
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/transports/SSETransport.ts` around lines 472 - 477, SSETransport's
rcLog references a non-existent lastActivityTime causing NaN; add a private
lastActivityTime = 0 property to the SSETransport class and update it to
Date.now() inside the resetLivenessTimer method (the same place
WebSocketTransport updates lastActivityTime) so the rcLog calculation
(`Date.now() - this.lastActivityTime`) is valid; ensure the property name
matches exactly and leave the existing rcLog call (and getLastSequenceNum,
reconnectAttempts, state references) unchanged.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/remote-control-server/src/index.ts`:
- Around line 94-101: gracefulShutdown currently calls closeAllConnections()
synchronously and exits immediately, which can terminate before WebSocket close
frames flush; modify shutdown so it lives alongside the Bun.serve() instance and
awaits connection teardown: change closeAllConnections to return a Promise that
resolves when all tracked sockets emit 'close' (or after a configurable
timeout), call await closeAllConnections() inside gracefulShutdown, then call
the Bun.serve server's stop/close API (the server object returned by Bun.serve)
to stop accepting new requests, and only after both awaiting server.stop() and
the connections-promise resolve call process.exit(0); ensure gracefulShutdown
references the server variable and do not call process.exit until awaits
complete.
- Around line 56-63: The route prefixes are inverted: v1SessionIngress (imported
from ./routes/v1/session-ingress) is mounted at "/v2/session_ingress" and the v2
handlers (v2CodeSessions, v2Worker, v2WorkerEventsStream, v2WorkerEvents) are
mounted at "/v1/code/sessions"; change the mounts so v1SessionIngress is
registered at "/v1/session_ingress" and the four v2 handlers are registered at
"/v2/code/sessions" to restore the correct API surface.
- Around line 8-9: Replace Node-specific runtime imports and process signal
handling with Bun APIs: remove usage/imports of dirname, resolve from
"node:path" and fileURLToPath from "node:url" and replace any occurrences of
dirname(fileURLToPath(import.meta.url)) with import.meta.dir; update any path
resolution that used resolve(...) to use new path joining relative to
import.meta.dir or Bun.path utilities as appropriate; replace process.on(...)
signal handlers with Bun.signal or Bun.serve signal APIs (e.g.,
Bun.signal("SIGINT", ...) or Bun.serve's lifecycle hooks) so wherever process.on
is used, switch to Bun.signal for the same signals and handlers. Ensure all
references in this file to dirname, fileURLToPath, resolve, and process.on are
removed and replaced with import.meta.dir and Bun signal equivalents.
In `@scripts/rcs.ts`:
- Around line 13-14: The two console.log lines in scripts/rcs.ts reference
non-existent config.jwtSecret and config.dbPath, causing undefined logs and
TypeScript errors; remove those console.log statements or replace them with
existing config fields from the config export, or alternatively add jwtSecret
and dbPath to the config definition if those values are actually required —
locate the references to config.jwtSecret and config.dbPath in scripts/rcs.ts
and either delete those lines or update them to use valid properties from the
config object (or add corresponding properties to the config module).
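The shutdown-ordering part of the index.ts prompt above can be sketched runtime-agnostically: track sockets, resolve a promise when each reports closed, and race the whole set against a timeout. The socket interface here is a stand-in for illustration, not Bun's actual WebSocket type:

```typescript
interface TrackedSocket {
  close(): void;
  onClose(cb: () => void): void; // fires once the close frame has flushed
}

const sockets = new Set<TrackedSocket>();

// Resolves once every tracked socket has reported closed, or after
// timeoutMs, so shutdown cannot hang forever on a wedged connection.
function closeAllConnections(timeoutMs = 5000): Promise<void> {
  const pending = [...sockets].map(
    (ws) =>
      new Promise<void>((resolve) => {
        ws.onClose(resolve);
        ws.close();
      }),
  );
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
  return Promise.race([Promise.all(pending).then(() => {}), timeout]);
}

// In gracefulShutdown: stop accepting new requests first, then
// `await closeAllConnections()` before calling process.exit(0).
```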
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6e9eda5d-3439-4b80-921d-521e1731da30
📒 Files selected for processing (2)
packages/remote-control-server/src/index.ts
scripts/rcs.ts
```ts
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n packages/remote-control-server/src/index.ts | head -120
```

Repository: claude-code-best/claude-code
Length of output: 4349
🏁 Script executed:
```shell
# Check if there are any patterns in other files for how Bun path/file handling should be done
rg "import\.meta\.(dir|file)" --type ts --type tsx -A 2 -B 2 | head -50
```

Repository: claude-code-best/claude-code
Length of output: 99
🏁 Script executed:
```shell
# Check if there are any patterns in other files for how Bun path/file handling should be done
rg "import\.meta\.(dir|file)" -t ts -A 2 -B 2 | head -50
```

Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
```shell
# Check what the CLAUDE.md guidelines say about Bun APIs
cat CLAUDE.md | grep -A 10 -B 2 "Bun API"
```

Repository: claude-code-best/claude-code
Length of output: 1073
🏁 Script executed:
```shell
# Check if remote-control-server is part of workspace and what its configuration is
cat packages/remote-control-server/package.json | head -30
```

Repository: claude-code-best/claude-code
Length of output: 734
🏁 Script executed:
```shell
# Check if there are any other Bun entrypoints in the codebase to see how they handle path resolution
find . -name "*.ts" -type f -path "*/entrypoints/*" -o -name "*.ts" -type f -path "*/src/index.ts" | grep -v node_modules | head -20
```

Repository: claude-code-best/claude-code
Length of output: 1014
🏁 Script executed:
```shell
# Check how the main CLI entrypoint handles imports
head -50 src/entrypoints/cli.tsx | grep -E "^import|from"
```

Repository: claude-code-best/claude-code
Length of output: 156
🏁 Script executed:
```shell
# Check if any other code uses Bun's import.meta.dir
rg "import\.meta\.dir" -t ts | head -20
```

Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
```shell
# Check if remote-control-server is documented anywhere in CLAUDE.md
rg "remote-control-server" CLAUDE.md
```

Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Verify Bun's native capabilities for path and signal handling
# Check Bun documentation examples in comments or any build config
grep -r "import.meta.dir" . --include="*.ts" --include="*.tsx" --include="*.md" 2>/dev/null | head -5
Length of output: 54
Use Bun APIs instead of Node runtime imports and process methods.
Lines 8–9 import dirname, resolve from node:path and fileURLToPath from node:url. Lines 100–101 use process.on() for signal handling. Per the Bun-only guideline applied to all TS files, replace these with Bun-native equivalents:
- Replace dirname(fileURLToPath(import.meta.url)) with import.meta.dir
- Replace process.on() with Bun's signal handling API
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/index.ts` around lines 8 - 9, Replace
Node-specific runtime imports and process signal handling with Bun APIs: remove
usage/imports of dirname, resolve from "node:path" and fileURLToPath from
"node:url" and replace any occurrences of
dirname(fileURLToPath(import.meta.url)) with import.meta.dir; update any path
resolution that used resolve(...) to use new path joining relative to
import.meta.dir or Bun.path utilities as appropriate; replace process.on(...)
signal handlers with Bun.signal or Bun.serve signal APIs (e.g.,
Bun.signal("SIGINT", ...) or Bun.serve's lifecycle hooks) so wherever process.on
is used, switch to Bun.signal for the same signals and handlers. Ensure all
references in this file to dirname, fileURLToPath, resolve, and process.on are
removed and replaced with import.meta.dir and Bun signal equivalents.
// v1 Session Ingress (WebSocket)
app.route("/v2/session_ingress", v1SessionIngress);

// v2 Code Sessions routes
app.route("/v1/code/sessions", v2CodeSessions);
app.route("/v1/code/sessions", v2Worker);
app.route("/v1/code/sessions", v2WorkerEventsStream);
app.route("/v1/code/sessions", v2WorkerEvents);
Fix the swapped v1/v2 route prefixes.
./routes/v1/session-ingress is mounted under /v2/..., while the ./routes/v2/* handlers are mounted under /v1/.... That inverts the published API surface and will send clients to the wrong endpoints.
Suggested prefix correction
- app.route("/v2/session_ingress", v1SessionIngress);
+ app.route("/v1/session_ingress", v1SessionIngress);
- app.route("/v1/code/sessions", v2CodeSessions);
- app.route("/v1/code/sessions", v2Worker);
- app.route("/v1/code/sessions", v2WorkerEventsStream);
- app.route("/v1/code/sessions", v2WorkerEvents);
+ app.route("/v2/code/sessions", v2CodeSessions);
+ app.route("/v2/code/sessions", v2Worker);
+ app.route("/v2/code/sessions", v2WorkerEventsStream);
+ app.route("/v2/code/sessions", v2WorkerEvents);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
// v1 Session Ingress (WebSocket)
app.route("/v2/session_ingress", v1SessionIngress);
// v2 Code Sessions routes
app.route("/v1/code/sessions", v2CodeSessions);
app.route("/v1/code/sessions", v2Worker);
app.route("/v1/code/sessions", v2WorkerEventsStream);
app.route("/v1/code/sessions", v2WorkerEvents);

Suggested:
// v1 Session Ingress (WebSocket)
app.route("/v1/session_ingress", v1SessionIngress);
// v2 Code Sessions routes
app.route("/v2/code/sessions", v2CodeSessions);
app.route("/v2/code/sessions", v2Worker);
app.route("/v2/code/sessions", v2WorkerEventsStream);
app.route("/v2/code/sessions", v2WorkerEvents);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/index.ts` around lines 56 - 63, The route
prefixes are inverted: v1SessionIngress (imported from
./routes/v1/session-ingress) is mounted at "/v2/session_ingress" and the v2
handlers (v2CodeSessions, v2Worker, v2WorkerEventsStream, v2WorkerEvents) are
mounted at "/v1/code/sessions"; change the mounts so v1SessionIngress is
registered at "/v1/session_ingress" and the four v2 handlers are registered at
"/v2/code/sessions" to restore the correct API surface.
async function gracefulShutdown(signal: string) {
  console.log(`\n[RCS] Received ${signal}, shutting down...`);
  closeAllConnections();
  process.exit(0);
}

process.on("SIGINT", () => gracefulShutdown("SIGINT"));
process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
Don't exit the process immediately after initiating WebSocket closes.
closeAllConnections() is synchronous and only queues ws.close(...), so Line 97 can kill the process before close frames and in-flight responses flush. The shutdown orchestration needs to live next to Bun.serve() so the server can be stopped cleanly before exiting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/src/index.ts` around lines 94 - 101,
gracefulShutdown currently calls closeAllConnections() synchronously and exits
immediately, which can terminate before WebSocket close frames flush; modify
shutdown so it lives alongside the Bun.serve() instance and awaits connection
teardown: change closeAllConnections to return a Promise that resolves when all
tracked sockets emit 'close' (or after a configurable timeout), call await
closeAllConnections() inside gracefulShutdown, then call the Bun.serve server's
stop/close API (the server object returned by Bun.serve) to stop accepting new
requests, and only after both awaiting server.stop() and the connections-promise
resolve call process.exit(0); ensure gracefulShutdown references the server
variable and do not call process.exit until awaits complete.
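The Promise-based teardown the review asks for can be sketched as follows. This is illustrative only: FakeSocket, trackSocket, and the connections set are stand-ins invented here to show the shape, not the actual ws-handler API.

```typescript
// Sketch of a Promise-based closeAllConnections, per the review's suggestion.
// FakeSocket stands in for a real WebSocket; a production version would hook
// the socket's actual 'close' event instead of a hand-rolled listener list.

type CloseListener = () => void;

class FakeSocket {
  closed = false;
  private listeners: CloseListener[] = [];
  onClose(fn: CloseListener): void {
    this.listeners.push(fn);
  }
  close(_code?: number, _reason?: string): void {
    this.closed = true;
    for (const fn of this.listeners) fn(); // real sockets fire this asynchronously
  }
}

const connections = new Set<FakeSocket>();

function trackSocket(ws: FakeSocket): void {
  connections.add(ws);
}

// Resolves once every tracked socket reports 'close', or after timeoutMs
// as a bounded fallback so shutdown cannot hang forever.
function closeAllConnections(timeoutMs = 5000): Promise<void> {
  const pending = [...connections].map(
    (ws) =>
      new Promise<void>((resolve) => {
        ws.onClose(() => {
          connections.delete(ws);
          resolve();
        });
        ws.close(1001, "server shutting down");
      }),
  );
  const deadline = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
  return Promise.race([Promise.all(pending).then(() => undefined), deadline]);
}
```

With this shape, gracefulShutdown can await closeAllConnections() before stopping the server and exiting, which is the ordering the review comment asks for.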
console.log(`[RCS] JWT Secret: ${config.jwtSecret === "change-me-in-production" ? "default (set RCS_JWT_SECRET)" : "custom"}`);
console.log(`[RCS] DB: ${config.dbPath}`);
🧩 Analysis chain
🏁 Script executed:
# Check if the files exist and read their content
echo "=== Checking scripts/rcs.ts ===" && \
wc -l scripts/rcs.ts 2>/dev/null || echo "File not found" && \
cat -n scripts/rcs.ts 2>/dev/null || echo "Cannot read file"
Length of output: 906
🏁 Script executed:
# Check the config file
echo "=== Checking packages/remote-control-server/src/config.ts ===" && \
wc -l packages/remote-control-server/src/config.ts 2>/dev/null || echo "File not found" && \
cat -n packages/remote-control-server/src/config.ts 2>/dev/null || echo "Cannot read file"
Length of output: 996
Remove references to non-existent config fields.
config.jwtSecret and config.dbPath do not exist in packages/remote-control-server/src/config.ts. Lines 13–14 of scripts/rcs.ts access fields that aren't defined, resulting in undefined being logged and TypeScript type errors. Either add these fields to the config object or remove these log statements.
Remove the non-existent config field logs
console.log(`[RCS] Starting Remote Control Server...`);
console.log(`[RCS] Port: ${config.port}`);
console.log(`[RCS] API Key configuration loaded`);
-console.log(`[RCS] JWT Secret: ${config.jwtSecret === "change-me-in-production" ? "default (set RCS_JWT_SECRET)" : "custom"}`);
-console.log(`[RCS] DB: ${config.dbPath}`);
📝 Committable suggestion
Original:
console.log(`[RCS] JWT Secret: ${config.jwtSecret === "change-me-in-production" ? "default (set RCS_JWT_SECRET)" : "custom"}`);
console.log(`[RCS] DB: ${config.dbPath}`);

Suggested:
console.log(`[RCS] Starting Remote Control Server...`);
console.log(`[RCS] Port: ${config.port}`);
console.log(`[RCS] API Key configuration loaded`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/rcs.ts` around lines 13 - 14, The two console.log lines in
scripts/rcs.ts reference non-existent config.jwtSecret and config.dbPath,
causing undefined logs and TypeScript errors; remove those console.log
statements or replace them with existing config fields from the config export,
or alternatively add jwtSecret and dbPath to the config definition if those
values are actually required — locate the references to config.jwtSecret and
config.dbPath in scripts/rcs.ts and either delete those lines or update them to
use valid properties from the config object (or add corresponding properties to
the config module).
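If the two fields were genuinely wanted rather than removed, a minimal env-backed config sketch could look like the following. This is hypothetical: RCS_JWT_SECRET and RCS_DB_PATH are assumed variable names for illustration, not confirmed contents of config.ts.

```typescript
// Hypothetical sketch only: env-var-backed config fields with defaults.
// Field and variable names are assumptions, not the actual config.ts shape.

interface RcsConfig {
  port: number;
  jwtSecret: string;
  dbPath: string;
}

function loadConfig(env: Record<string, string | undefined>): RcsConfig {
  return {
    port: Number(env.RCS_PORT ?? 3000),
    jwtSecret: env.RCS_JWT_SECRET ?? "change-me-in-production",
    dbPath: env.RCS_DB_PATH ?? "./data/rcs.db",
  };
}
```

With fields like these in place, the startup log's ternary (default vs. custom secret) would type-check instead of reading undefined.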
Actionable comments posted: 2
♻️ Duplicate comments (3)
packages/remote-control-server/src/index.ts (3)
60-63: ⚠️ Potential issue | 🟠 Major
Mount v2 handlers on a v2 prefix to avoid API version mismatch. Lines 60-63 wire v2CodeSessions/v2Worker* under "/v1/code/sessions". This likely exposes the wrong versioned endpoints to clients.
Suggested patch
-app.route("/v1/code/sessions", v2CodeSessions);
-app.route("/v1/code/sessions", v2Worker);
-app.route("/v1/code/sessions", v2WorkerEventsStream);
-app.route("/v1/code/sessions", v2WorkerEvents);
+app.route("/v2/code/sessions", v2CodeSessions);
+app.route("/v2/code/sessions", v2Worker);
+app.route("/v2/code/sessions", v2WorkerEventsStream);
+app.route("/v2/code/sessions", v2WorkerEvents);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/index.ts` around lines 60 - 63, The v2 handlers are mounted under the v1 path; change the routing so v2CodeSessions, v2Worker, v2WorkerEventsStream, and v2WorkerEvents are mounted under a v2 prefix (e.g., "/v2/code/sessions" or an app.route("/v2", ...) grouping) instead of "/v1/code/sessions" to avoid exposing the wrong API version; update the route strings or wrap the handlers in a v2 router that is mounted at "/v2" so all four symbols are served from the correct versioned path.
94-101: ⚠️ Potential issue | 🟠 Major
Do not hard-exit before WebSocket teardown completes. Lines 96-97 close sockets and exit immediately. With the current closeAllConnections(): void, shutdown can terminate before close frames/in-flight responses are flushed.
Suggested direction
 async function gracefulShutdown(signal: string) {
   console.log(`\n[RCS] Received ${signal}, shutting down...`);
-  closeAllConnections();
-  process.exit(0);
+  await closeAllConnections(); // make this Promise-based in ws-handler
+  Bun.exit(0);
 }
#!/bin/bash
set -euo pipefail
# Confirm shutdown currently exits immediately and closeAllConnections is synchronous.
rg -n 'async function gracefulShutdown|closeAllConnections\(|process\.exit\(' packages/remote-control-server/src/index.ts
rg -n 'export function closeAllConnections\(\): void' packages/remote-control-server/src/transport/ws-handler.ts
# Expected evidence:
# 1) gracefulShutdown calls closeAllConnections then immediate exit.
# 2) closeAllConnections is declared void (non-awaitable teardown).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/index.ts` around lines 94 - 101, The shutdown currently calls closeAllConnections() synchronously and exits immediately; change closeAllConnections (exported from ws-handler.ts) to return a Promise<void> that resolves when all WebSocket close frames and in-flight responses are fully flushed, then update gracefulShutdown in index.ts to await that Promise before calling process.exit(0); keep gracefulShutdown async, wire process.on("SIGINT"/"SIGTERM") to call it and handle rejections (e.g., .catch or top-level try/catch) and optionally add a bounded timeout fallback to force-exit if teardown hangs.
8-9: ⚠️ Potential issue | 🟠 Major
Replace Node runtime APIs and process signal hooks with Bun-native equivalents. Lines 8-9 and lines 37-38 depend on Node-specific path/url helpers, and lines 100-101 use process.on(...). This conflicts with the Bun-only runtime rule for TS/JS files.
Suggested patch
-import { dirname, resolve } from "node:path";
-import { fileURLToPath } from "node:url";
@@
-const __dirname = dirname(fileURLToPath(import.meta.url));
-const webDir = resolve(__dirname, "../web");
+const webDir = `${import.meta.dir}/../web`;
@@
-process.on("SIGINT", () => gracefulShutdown("SIGINT"));
-process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
+Bun.signal("SIGINT", () => void gracefulShutdown("SIGINT"));
+Bun.signal("SIGTERM", () => void gracefulShutdown("SIGTERM"));
As per coding guidelines, "**/*.{ts,tsx,js,jsx}: Use Bun APIs for all imports, builds, and execution (not Node.js APIs)."
#!/bin/bash
set -euo pipefail
# Verify Node runtime APIs are still present in this Bun entrypoint.
rg -n 'from "node:(path|url)"|process\.on\(|process\.exit\(' packages/remote-control-server/src/index.ts
# Expected after fix: no matches.
Also applies to: 37-38, 100-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/index.ts` around lines 8 - 9, The file currently imports Node-specific helpers (dirname, resolve from "node:path" and fileURLToPath from "node:url") and installs process signal handlers via process.on; replace these with Bun-native patterns: import path helpers from "path" (or use Bun.path) instead of "node:path", compute the file directory using import.meta.url + URL (avoid fileURLToPath and the fileURLToPath symbol), and swap any process.on('SIGINT'| 'SIGTERM', ...) usages to Bun.on('SIGINT', ...) / Bun.on('SIGTERM', ...). Update any process.exit calls to use Bun.exit if present. Ensure you update references to dirname/resolve usage to work with the new directory calculation and Bun-compatible path import.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/remote-control-server/README.md`:
- Around line 125-129: Add explicit language identifiers to the unlabeled fenced
code blocks in README.md: annotate the Caddy snippet fence (the block containing
"rcs.example.com { reverse_proxy localhost:3000 }") with ```caddyfile and
annotate the ASCII diagram block (the multi-line box/diagram) with a neutral
label such as ```text (also apply the same change to the similar fences
mentioned for lines 133-146) so markdownlint MD040 warnings are resolved.
- Around line 22-23: The README currently contradicts itself by showing a
persistent volume mount "-v rcs-data:/app/data" in the Docker run examples while
later stating the server is in-memory and data is cleared on restart; update the
sections referencing the volume examples (the Docker run snippet containing "-v
rcs-data:/app/data" and the block at lines ~95-99) to either remove the volume
examples or explicitly state that mounting a host volume will persist data
across restarts and is optional, and modify the later note (around the in-memory
data statement at ~149) to clarify that without a volume mount data is
ephemeral; ensure the same terminology is used and the examples reference the
exact flag "-v rcs-data:/app/data" so readers can locate and reconcile the two
behaviors.
---
Duplicate comments:
In `@packages/remote-control-server/src/index.ts`:
- Around line 60-63: The v2 handlers are mounted under the v1 path; change the
routing so v2CodeSessions, v2Worker, v2WorkerEventsStream, and v2WorkerEvents
are mounted under a v2 prefix (e.g., "/v2/code/sessions" or an app.route("/v2",
...) grouping) instead of "/v1/code/sessions" to avoid exposing the wrong API
version; update the route strings or wrap the handlers in a v2 router that is
mounted at "/v2" so all four symbols are served from the correct versioned path.
- Around line 94-101: The shutdown currently calls closeAllConnections()
synchronously and exits immediately; change closeAllConnections (exported from
ws-handler.ts) to return a Promise<void> that resolves when all WebSocket close
frames and in-flight responses are fully flushed, then update gracefulShutdown
in index.ts to await that Promise before calling process.exit(0); keep
gracefulShutdown async, wire process.on("SIGINT"/"SIGTERM") to call it and
handle rejections (e.g., .catch or top-level try/catch) and optionally add a
bounded timeout fallback to force-exit if teardown hangs.
- Around line 8-9: The file currently imports Node-specific helpers (dirname,
resolve from "node:path" and fileURLToPath from "node:url") and installs process
signal handlers via process.on; replace these with Bun-native patterns: import
path helpers from "path" (or use Bun.path) instead of "node:path", compute the
file directory using import.meta.url + URL (avoid fileURLToPath and the
fileURLToPath symbol), and swap any process.on('SIGINT'| 'SIGTERM', ...) usages
to Bun.on('SIGINT', ...) / Bun.on('SIGTERM', ...). Update any process.exit calls
to use Bun.exit if present. Ensure you update references to dirname/resolve
usage to work with the new directory calculation and Bun-compatible path import.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a23028e8-b865-4033-a7a6-37953ce002ae
📒 Files selected for processing (5)
.github/workflows/release-rcs.yml
packages/remote-control-server/Dockerfile
packages/remote-control-server/README.md
packages/remote-control-server/src/config.ts
packages/remote-control-server/src/index.ts
✅ Files skipped from review due to trivial changes (1)
- packages/remote-control-server/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (2)
- .github/workflows/release-rcs.yml
- packages/remote-control-server/src/config.ts
-v rcs-data:/app/data \
ghcr.io/claude-code-best/remote-control-server:latest
Clarify/remove persistent volume examples for an in-memory server.
Line [22]-[23] and Line [95]-[99] suggest persistent storage, but Line [149] says data is in-memory and cleared on restart. This is confusing for deployment expectations.
Also applies to: 95-99, 149-149
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/README.md` around lines 22 - 23, The README
currently contradicts itself by showing a persistent volume mount "-v
rcs-data:/app/data" in the Docker run examples while later stating the server is
in-memory and data is cleared on restart; update the sections referencing the
volume examples (the Docker run snippet containing "-v rcs-data:/app/data" and
the block at lines ~95-99) to either remove the volume examples or explicitly
state that mounting a host volume will persist data across restarts and is
optional, and modify the later note (around the in-memory data statement at
~149) to clarify that without a volume mount data is ephemeral; ensure the same
terminology is used and the examples reference the exact flag "-v
rcs-data:/app/data" so readers can locate and reconcile the two behaviors.
```
rcs.example.com {
    reverse_proxy localhost:3000
}
```
Add language identifiers to fenced code blocks (MD040).
Line [125] and Line [133] use unlabeled fences; markdownlint flags this and it hurts renderer/tooling behavior.
Suggested patch
-```
+```caddyfile
 rcs.example.com {
     reverse_proxy localhost:3000
 }
@@
-```
+```text
┌─────────────┐ WebSocket/SSE ┌──────────────────┐
│ Claude Code │ ◄──────────────────► │ Remote Control │
│ (Bridge CLI)│ HTTP API │ Server │
...
└──────────────────┘
Also applies to: 133-146
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 125-125: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/README.md` around lines 125 - 129, Add
explicit language identifiers to the unlabeled fenced code blocks in README.md:
annotate the Caddy snippet fence (the block containing "rcs.example.com {
reverse_proxy localhost:3000 }") with ```caddyfile and annotate the ASCII
diagram block (the multi-line box/diagram) with a neutral label such as ```text
(also apply the same change to the similar fences mentioned for lines 133-146)
so markdownlint MD040 warnings are resolved.
Received, thank you.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/remote-control-server/Dockerfile`:
- Around line 15-32: The runtime Dockerfile currently runs as root; update the
runtime stage to drop privileges by switching to the existing non-root bun user
(add USER bun in the runtime stage after files are copied) and ensure the copied
artifacts (dist/server.js and web) and the /app directory are
readable/executable by that user (adjust ownership or use chown during COPY or
an explicit chown RUN before switching users); keep the HEALTHCHECK and CMD
as-is but ensure they run under USER bun so the container runs with least
privilege.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0377db7d-34b7-4d01-a6ac-fcacaeb0d4a9
📒 Files selected for processing (1)
packages/remote-control-server/Dockerfile
FROM oven/bun:1-slim AS runtime

ARG VERSION=0.1.0
ENV RCS_VERSION=${VERSION}

WORKDIR /app

COPY --from=builder /app/dist/server.js ./dist/server.js
COPY packages/remote-control-server/web ./web

VOLUME /app/data

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD bun run -e "fetch('http://localhost:3000/health').then(r => r.ok ? process.exit(0) : process.exit(1)).catch(() => process.exit(1))"

CMD ["bun", "run", "dist/server.js"]
🧩 Analysis chain
🌐 Web query:
In the official oven/bun:1-slim image, is bun a valid non-root user, and what is the recommended least-privilege pattern for production containers?
💡 Result:
Yes. In the oven/bun slim images, bun is a real non-root account created in the image (e.g., groupadd bun --gid 1000 && useradd bun --uid 1000 ... --create-home), with the working directory set under /home/bun/app. [1] Bun’s own Docker guide also uses USER bun in the final (“release”) stage, implying this user is intended to be used for runtime. [2]
Recommended least-privilege pattern for production containers
- Use a multi-stage build: install/build in intermediate stages, and copy only the built artifacts + production dependencies into the final image. [2]
- In the final runtime stage, run as non-root (e.g., USER bun, or your own dedicated runtime user) and ensure files are owned appropriately (typically via COPY --chown=...). [2]
- Prefer a smaller, more locked-down runtime base where possible (Docker also points to Bun "Docker Hardened Images" as a production-ready option). [3]
Sources: [1] [2] [3]
Run runtime stage as non-root user
The runtime image does not set a non-root USER, which is a container hardening gap matching the static finding (DS-0002). The oven/bun:1-slim base image includes the bun non-root user, and using USER bun is the recommended least-privilege pattern for production.
Proposed fix
FROM oven/bun:1-slim AS runtime
WORKDIR /app
COPY --from=builder /app/dist/server.js ./dist/server.js
COPY packages/remote-control-server/web ./web
+RUN mkdir -p /app/data && chown -R bun:bun /app
VOLUME /app/data
+USER bun
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/Dockerfile` around lines 15 - 32, The runtime
Dockerfile currently runs as root; update the runtime stage to drop privileges
by switching to the existing non-root bun user (add USER bun in the runtime
stage after files are copied) and ensure the copied artifacts (dist/server.js
and web) and the /app directory are readable/executable by that user (adjust
ownership or use chown during COPY or an explicit chown RUN before switching
users); keep the HEALTHCHECK and CMD as-is but ensure they run under USER bun so
the container runs with least privilege.
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/remote-control-server/web/api.js (1)
63-85: Encode dynamic path segments before composing URLs.
id/sessionId are interpolated directly into route paths. Encoding them hardens against malformed identifiers containing reserved URL characters.
Proposed fix
 export function apiFetchSession(id) {
-  return api("GET", `/web/sessions/${id}`);
+  return api("GET", `/web/sessions/${encodeURIComponent(id)}`);
 }
 export function apiFetchSessionHistory(id) {
-  return api("GET", `/web/sessions/${id}/history`);
+  return api("GET", `/web/sessions/${encodeURIComponent(id)}/history`);
 }
 export function apiSendEvent(sessionId, body) {
-  return api("POST", `/web/sessions/${sessionId}/events`, body);
+  return api("POST", `/web/sessions/${encodeURIComponent(sessionId)}/events`, body);
 }
 export function apiSendControl(sessionId, body) {
-  return api("POST", `/web/sessions/${sessionId}/control`, body);
+  return api("POST", `/web/sessions/${encodeURIComponent(sessionId)}/control`, body);
 }
 export function apiInterrupt(sessionId) {
-  return api("POST", `/web/sessions/${sessionId}/interrupt`);
+  return api("POST", `/web/sessions/${encodeURIComponent(sessionId)}/interrupt`);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/api.js` around lines 63 - 85, The dynamic path segments in apiFetchSession, apiFetchSessionHistory, apiSendEvent, apiSendControl, and apiInterrupt are interpolated directly and must be URL-encoded; update each function to encode the id/sessionId (e.g., use encodeURIComponent(String(id)) or encodeURIComponent(String(sessionId)) when building the template literal path) so reserved characters are escaped before calling api.
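A quick illustration of why the encoding matters: without encodeURIComponent, an id containing reserved characters changes the path shape. The helpers below mirror the api.js pattern but are a standalone sketch with made-up ids.

```typescript
// Standalone sketch: building a session route with the id encoded.
// The path template mirrors api.js; the ids used below are made up.

function sessionEventsPath(sessionId: string): string {
  return `/web/sessions/${encodeURIComponent(sessionId)}/events`;
}

function unsafeSessionEventsPath(sessionId: string): string {
  return `/web/sessions/${sessionId}/events`; // reserved chars leak into the route
}
```

For a plain id both produce the same URL; for an id like "a/b?x=1" only the encoded version keeps the id inside a single path segment.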
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/remote-control-server/web/api.js`:
- Around line 41-47: The code unconditionally calls res.json() which can throw
SyntaxError for empty/non-JSON responses and bypass the structured error
handling; wrap the JSON parse in a try/catch or first check response
headers/body (e.g., Content-Type includes "application/json" or Content-Length >
0) before calling res.json(), assign a safe fallback like null or {} to data
when parsing fails, and then use the existing res.ok path (the const res, data,
fetch(url, opts) block) to build the error (err = data?.error || { type:
"unknown", message: res.statusText }) so non-JSON and empty responses are
handled without uncaught exceptions.
- Around line 7-14: The generateUuid function calls crypto.getRandomValues
without ensuring it exists; update generateUuid to check that crypto and
crypto.getRandomValues are available (e.g., typeof crypto.getRandomValues ===
"function") before using it, and if not present fall back to a non-cryptographic
UUID generator (e.g., use Math.random-based replacement of the UUID template) or
a safe alternative; reference the generateUuid function and the
crypto.getRandomValues call so the guard and fallback are implemented in the
existing fallback branch.
---
Nitpick comments:
In `@packages/remote-control-server/web/api.js`:
- Around line 63-85: The dynamic path segments in apiFetchSession,
apiFetchSessionHistory, apiSendEvent, apiSendControl, and apiInterrupt are
interpolated directly and must be URL-encoded; update each function to encode
the id/sessionId (e.g., use encodeURIComponent(String(id)) or
encodeURIComponent(String(sessionId)) when building the template literal path)
so reserved characters are escaped before calling api.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3ed50ea8-19c0-4974-8542-4d2a87db2b54
📒 Files selected for processing (1)
packages/remote-control-server/web/api.js
function generateUuid() {
  if (typeof crypto !== "undefined" && typeof crypto.randomUUID === "function") {
    return crypto.randomUUID();
  }
  // Fallback for non-secure contexts (HTTP without localhost)
  return "10000000-1000-4000-8000-100000000000".replace(/[018]/g, (c) =>
    (c ^ (crypto.getRandomValues(new Uint8Array(1))[0] & (15 >> (c / 4)))).toString(16),
  );
🧩 Analysis chain
🌐 Web query:
Is crypto.getRandomValues guaranteed to exist in environments where crypto.randomUUID is unavailable (including insecure HTTP pages and embedded webviews)?
💡 Result:
No.
- crypto.randomUUID() being unavailable does not imply crypto.getRandomValues() exists. randomUUID() can be missing simply because the engine is older (it shipped much later than getRandomValues()), or because you're not in a secure context. (developer.mozilla.org)
- On insecure HTTP pages, the relationship is actually the opposite: MDN notes that getRandomValues() is the only Crypto method usable from an insecure context, while randomUUID() is available only in secure contexts (HTTPS). (developer.mozilla.org)
- In embedded webviews, there's no universal guarantee either—availability depends on the embedded browser engine/version (so you still need runtime feature-detection). (chromium.googlesource.com)
Practical takeaway: use feature detection (e.g., if (globalThis.crypto?.getRandomValues) … else …), not assumptions based on randomUUID() presence/absence. (developer.mozilla.org)
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/randomUUID
- 2: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getRandomValues
- 3: https://chromium.googlesource.com/external/github.com/mdn/browser-compat-data/%2B/refs/tags/v6.0.12/api/Crypto.json
- 4: https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getRandomValues
Guard crypto.getRandomValues in the UUID fallback path.
Line 13 uses crypto.getRandomValues(...) without checking it exists. getRandomValues is not guaranteed to be available even when randomUUID is missing—availability depends on the runtime (insecure HTTP contexts, embedded webviews, or older engines may lack it entirely).
Proposed fix

```diff
 function generateUuid() {
-  if (typeof crypto !== "undefined" && typeof crypto.randomUUID === "function") {
-    return crypto.randomUUID();
+  const c = globalThis.crypto;
+  if (c && typeof c.randomUUID === "function") {
+    return c.randomUUID();
   }
-  // Fallback for non-secure contexts (HTTP without localhost)
-  return "10000000-1000-4000-8000-100000000000".replace(/[018]/g, (c) =>
-    (c ^ (crypto.getRandomValues(new Uint8Array(1))[0] & (15 >> (c / 4)))).toString(16),
-  );
+  if (c && typeof c.getRandomValues === "function") {
+    return "10000000-1000-4000-8000-100000000000".replace(/[018]/g, (ch) =>
+      (ch ^ (c.getRandomValues(new Uint8Array(1))[0] & (15 >> (ch / 4)))).toString(16),
+    );
+  }
+  // Last-resort fallback to avoid hard crash in constrained runtimes
+  return `${Date.now().toString(16)}-${Math.random().toString(16).slice(2, 10)}-4${Math.random().toString(16).slice(2, 5)}-8${Math.random().toString(16).slice(2, 5)}-${Math.random().toString(16).slice(2, 14)}`;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
function generateUuid() {
  const c = globalThis.crypto;
  if (c && typeof c.randomUUID === "function") {
    return c.randomUUID();
  }
  if (c && typeof c.getRandomValues === "function") {
    return "10000000-1000-4000-8000-100000000000".replace(/[018]/g, (ch) =>
      (ch ^ (c.getRandomValues(new Uint8Array(1))[0] & (15 >> (ch / 4)))).toString(16),
    );
  }
  // Last-resort fallback to avoid hard crash in constrained runtimes
  return `${Date.now().toString(16)}-${Math.random().toString(16).slice(2, 10)}-4${Math.random().toString(16).slice(2, 5)}-8${Math.random().toString(16).slice(2, 5)}-${Math.random().toString(16).slice(2, 14)}`;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/web/api.js` around lines 7 - 14: the generateUuid function calls crypto.getRandomValues without ensuring it exists. Update generateUuid to check that crypto and crypto.getRandomValues are available (e.g., `typeof crypto.getRandomValues === "function"`) before using it; if not present, fall back to a non-cryptographic UUID generator (e.g., a Math.random-based replacement over the UUID template) or another safe alternative. Implement the guard and fallback in the existing fallback branch around the crypto.getRandomValues call.
```js
const res = await fetch(url, opts);
const data = await res.json();

if (!res.ok) {
  const err = data.error || { type: "unknown", message: res.statusText };
  throw new Error(err.message || err.type);
}
```
Avoid unconditional res.json() parsing for all responses.
Line 42 can throw a SyntaxError for empty or non-JSON responses (e.g., proxy errors), which bypasses your structured error path at lines 44-46.
Proposed fix

```diff
-  const res = await fetch(url, opts);
-  const data = await res.json();
+  const res = await fetch(url, opts);
+  const raw = await res.text();
+  let data = {};
+  if (raw) {
+    try {
+      data = JSON.parse(raw);
+    } catch {
+      data = { error: { type: "invalid_json", message: raw.slice(0, 200) } };
+    }
+  }
   if (!res.ok) {
     const err = data.error || { type: "unknown", message: res.statusText };
     throw new Error(err.message || err.type);
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const res = await fetch(url, opts);
const raw = await res.text();
let data = {};
if (raw) {
  try {
    data = JSON.parse(raw);
  } catch {
    data = { error: { type: "invalid_json", message: raw.slice(0, 200) } };
  }
}
if (!res.ok) {
  const err = data.error || { type: "unknown", message: res.statusText };
  throw new Error(err.message || err.type);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/remote-control-server/web/api.js` around lines 41 - 47: the code unconditionally calls res.json(), which can throw a SyntaxError for empty or non-JSON responses and bypass the structured error handling. Wrap the JSON parse in a try/catch, or first check the response headers/body (e.g., Content-Type includes "application/json", or Content-Length > 0) before calling res.json(). Assign a safe fallback such as null or {} to data when parsing fails, then use the existing res.ok path to build the error (err = data?.error || { type: "unknown", message: res.statusText }) so non-JSON and empty responses are handled without uncaught exceptions.
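The body-parsing step in the proposed fix can be isolated and exercised without a network. A minimal sketch (the `parseBody` helper name is ours, not from the PR):

```javascript
// Tolerant body parsing mirroring the proposed fix: empty bodies become {},
// malformed JSON becomes a structured error object instead of throwing.
function parseBody(raw) {
  if (!raw) return {};
  try {
    return JSON.parse(raw);
  } catch {
    return { error: { type: "invalid_json", message: raw.slice(0, 200) } };
  }
}

console.log(JSON.stringify(parseBody("")));            // {}
console.log(parseBody('{"ok":true}').ok);              // true
console.log(parseBody("<html>502</html>").error.type); // invalid_json
```

Truncating the message to 200 characters keeps a proxy's HTML error page from flooding the thrown error while still leaving enough context to diagnose it.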
Summary by CodeRabbit
New Features
Enhancements