Conversation
… start reference (raphaeltm#713)

- Restructure the 3-column permissions table to 4 columns (Scope, Category, Permission, Access Level), matching the Cloudflare token creation UI
- Add explanatory text above the table describing how to read the columns
- Replace the duplicated (now stale) permission list in Quick Start with a link to the detailed table, avoiding two places to maintain

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…m#714)

- Explain the BYOC model upfront so readers know they don't need a cloud provider account
- Clarify that Go is needed to compile the VM agent binary (not for writing Go)
- Make the "domain configured in Cloudflare" prerequisite less vague, with a link to the setup section
- Make the GH_ vs GITHUB_ naming convention more prominent; this is a common source of deployment confusion
- Rename "Building & Deployment" to "Manual Building & Deployment (Optional)" so readers know to skip it if using Quick Start
- Update "Last updated" to ISO date format

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
raphaeltm#725)

Cloudflare Pages subdomains are globally unique across all accounts. When a fork creates a project named "sam-web-prod" but that subdomain is already taken (by the upstream account), CF assigns a suffix like "sam-web-prod-eui". The DNS CNAME was using the computed name (`${prefix}-web-${stack}.pages.dev`), which resolved to the upstream account's Pages project, not the fork's.

This caused app.defanglabs.ca to serve our production bundle (with api.simple-agent-manager.org hardcoded), making the login button fail with CORS errors: the app was talking to the wrong API.

Fix: use `pagesProject.subdomain` (the actual CF-assigned subdomain) instead of the computed name.

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
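For readers following along in the infra code, a minimal Pulumi TypeScript sketch of the corrected wiring; resource and variable names here are illustrative, not the repo's:

```typescript
import * as cloudflare from "@pulumi/cloudflare";

// Illustrative inputs; the real stack derives these from config.
const accountId = "0123456789abcdef";
const zoneId = "fedcba9876543210";
const prefix = "sam";
const stack = "prod";

// Cloudflare may suffix the Pages subdomain (e.g. "sam-web-prod-eui")
// when the computed name is already taken by another account.
const pagesProject = new cloudflare.PagesProject("web", {
  accountId,
  name: `${prefix}-web-${stack}`,
  productionBranch: "main",
});

// Wrong: `${prefix}-web-${stack}.pages.dev`, which can point at someone
// else's project. Right: the subdomain Cloudflare actually assigned.
const webCname = new cloudflare.Record("web-cname", {
  zoneId,
  name: "app",
  type: "CNAME",
  value: pagesProject.subdomain,
  proxied: true,
});
```

Because `subdomain` is a Pulumi output resolved from the created project, the CNAME tracks whatever name Cloudflare assigns, suffix included.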
…aphaeltm#726)

* fix: pass Origin CA Key to Pulumi for OriginCaCertificate creation

The Cloudflare Origin CA API requires the dedicated Origin CA Key (CLOUDFLARE_API_USER_SERVICE_KEY), not a regular API token. Without it, Pulumi fails with error 1016 "User is not authorized to perform this action".

* docs: add CF_ORIGIN_CA_KEY to required secrets in self-hosting guide

Documents the new required secret added in the previous commit.

* fix: make CF_ORIGIN_CA_KEY optional; Pulumi should handle Origin CA with a regular API token

The Pulumi Cloudflare provider (v3.32.0+; we're on v5.49.1) supports all auth schemes for Origin CA certificates. The dedicated Origin CA Key (CLOUDFLARE_API_USER_SERVICE_KEY) should only be needed as a fallback if the regular CF_API_TOKEN fails with error 1016.

- Change deploy validation from a hard failure to a warning when CF_ORIGIN_CA_KEY is missing
- Move CF_ORIGIN_CA_KEY to the optional secrets section in the self-hosting docs
- Keep the env var pass-through in the Pulumi steps (an empty string is ignored)

Co-authored-by: Lionello Lunesu <lio+git@lunesu.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: lionello <lionello@users.noreply.github.com>
raphaeltm#727)

The CF API token just needs Zone > SSL and Certificates > Edit to create Origin CA certificates. The dedicated Origin CA Key (CF_ORIGIN_CA_KEY) is a deprecated Cloudflare credential (removal Sept 2026) and should not be the primary recommendation.

- Add the SSL and Certificates permission to the required permissions table
- Downgrade CF_ORIGIN_CA_KEY to a deprecated fallback in the docs
- Update the deploy warning to suggest the permission fix, not the separate key

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Daily devlog covering April 15 development: three architectural pivots for the AI proxy (AI Gateway → Workers AI binding → non-streaming with SSE wrapping), a taxonomy of model-specific failure modes (thinking tags, control token leaks, silent hangs, surprise tool call formats), and Origin CA permission fixes.

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ltm#731)

* chore: move task to active

* feat: add resource diagnostics for workspace build timeouts

When a workspace build times out, collect CPU/memory/disk metrics via sysinfo.CollectQuick() and append actionable diagnostics to the error message. If resources are constrained (CPU load > 2x per core, memory > 90%, or disk > 90%), suggest using a larger VM size. The enriched message flows through the existing errorMessage pipeline to the UI. Resource diagnostics are also included in the node event detail map for observability.

* chore: update task checklist; all items complete

* test: add node event integration test for resource diagnostics

Verify that the detail map pattern used in startWorkspaceProvision correctly includes resourceDiagnostics for timeout errors and omits it for non-timeout errors (a sketch of this pattern follows the entry). Addresses a task-completion-validator HIGH finding about an untested observability path. Also fix a duplicate checklist item in the task file.

* chore: archive completed task

* fix: make diagnostic thresholds configurable via env vars

Addresses a constitution Principle XI violation: the CPU saturation, memory exhaustion, and disk full thresholds are now configurable via DIAG_CPU_SATURATION_THRESHOLD (default: 2.0), DIAG_MEM_EXHAUSTED_THRESHOLD (default: 90), and DIAG_DISK_FULL_THRESHOLD (default: 90). Add a getEnvFloat helper to the config package. Add a custom-threshold test and additional gap-coverage tests from the test-engineer review.

* refactor: use constructor injection for sysinfo test stubs

Replace the post-construction SetReadFileFunc/SetStatFSFunc setters with ReadFileFunc/StatFSFunc fields on CollectorConfig. This eliminates the race-detector-visible mutation of function fields after construction, addressing the go-specialist HIGH finding.

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
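The commits above don't show the detail-map code, and the collecting side (sysinfo) is Go; purely as an illustration of the include-only-on-timeout pattern that the integration test asserts, here is a small TypeScript sketch with assumed names:

```typescript
// All names here are assumptions for illustration, not taken from the repo.
interface BuildFailure {
  isTimeout: boolean;
  errorMessage: string; // already enriched with diagnostics upstream
  resourceDiagnostics?: string; // e.g. "load 8.4 on 2 cores; mem 94%; disk 91%"
}

function buildNodeEventDetail(f: BuildFailure): Record<string, string> {
  const detail: Record<string, string> = { error: f.errorMessage };
  // Diagnostics only accompany timeouts: a fast failure was not resource
  // starvation, so the key is omitted entirely rather than sent empty.
  // That omission is exactly what the test verifies.
  if (f.isTimeout && f.resourceDiagnostics) {
    detail.resourceDiagnostics = f.resourceDiagnostics;
  }
  return detail;
}
```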
…#732)

AI_PROXY_DEFAULT_MODEL was manually set as a Worker secret on staging, overriding code defaults and causing stale model selection. These are configuration values, not secrets: move them to the wrangler.toml [vars] section and add stale-secret cleanup to configure-secrets.sh so old secrets don't shadow the new vars on deploy.

Co-authored-by: SAM <sam@simple-agent-manager.org>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
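The cleanup itself lives in configure-secrets.sh; a rough TypeScript sketch of the idea, using real wrangler subcommands but an illustrative variable list:

```typescript
import { execSync } from "node:child_process";

// Values that have been promoted from Worker secrets to wrangler.toml
// [vars]. A leftover secret with the same name shadows the var on deploy.
// (Illustrative list; the real one lives in configure-secrets.sh.)
const promotedToVars = ["AI_PROXY_DEFAULT_MODEL"];

// `wrangler secret list` prints the Worker's secret names.
const listing = execSync("wrangler secret list", { encoding: "utf8" });

for (const name of promotedToVars) {
  if (listing.includes(`"${name}"`)) {
    // `wrangler secret delete` confirms interactively, so we print the
    // command here rather than assume non-interactive behavior.
    console.log(`stale secret shadows a var; run: wrangler secret delete ${name}`);
  }
}
```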
…eltm#734)

* feat: improve knowledge graph retrieval and agent instructions

Replace keyword-based FTS5 retrieval with high-confidence retrieval that returns ALL observations above a confidence threshold (default 0.8). This ensures agents see user preferences and project context regardless of whether the task title happens to contain matching keywords.

Key changes:
- Add getAllHighConfidenceKnowledge() to the knowledge module, DO, and service layer (see the sketch after this entry)
- Format retrieved knowledge as readable directives grouped by entity
- Replace passive instructions with mandatory behavioral triggers for when to save and when to search knowledge
- Add decision-point retrieval instructions (search before content, UI, architecture, and business decisions)
- Add a bootstrapping prompt for empty knowledge graphs directing agents to search past conversations
- Differentiate conversation mode with more aggressive capture instructions
- New configurable env vars: KNOWLEDGE_AUTO_RETRIEVE_MIN_CONFIDENCE (0.8), KNOWLEDGE_AUTO_RETRIEVE_HIGH_CONFIDENCE_LIMIT (50)

* chore: trigger CI with updated PR body

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
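A sketch of what the high-confidence path might look like in a SQLite-backed Durable Object; the table name, columns, and directive formatting are assumptions:

```typescript
interface Observation {
  entity: string;
  content: string;
  confidence: number;
}

// Defaults mirror KNOWLEDGE_AUTO_RETRIEVE_MIN_CONFIDENCE (0.8) and
// KNOWLEDGE_AUTO_RETRIEVE_HIGH_CONFIDENCE_LIMIT (50) from the commit.
function getAllHighConfidenceKnowledge(
  sql: SqlStorage,
  minConfidence = 0.8,
  limit = 50,
): Observation[] {
  // No FTS5 keyword match: every observation the system is confident
  // about comes back, whatever the task title says.
  return sql
    .exec<Observation>(
      "SELECT entity, content, confidence FROM observations WHERE confidence >= ? ORDER BY confidence DESC LIMIT ?",
      minConfidence,
      limit,
    )
    .toArray();
}

// "Readable directives grouped by entity", per the commit message.
function formatDirectives(observations: Observation[]): string {
  const byEntity = new Map<string, string[]>();
  for (const o of observations) {
    const items = byEntity.get(o.entity) ?? [];
    items.push(o.content);
    byEntity.set(o.entity, items);
  }
  return [...byEntity]
    .map(([entity, items]) => `${entity}:\n${items.map((i) => `- ${i}`).join("\n")}`)
    .join("\n\n");
}
```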
Daily devlog covering a cloud-init boot ordering race condition, ghost Docker image pre-pulls, timeout cascades, and the SQLite diagnostic tooling built to find them all.

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…ar (raphaeltm#733)

* fix: OpenCode config requires npm + models keys for custom providers

OpenCode's model resolver splits model names on "/" to find the provider. Custom (non-built-in) providers like our "platform" AI proxy need:
- "npm": "@ai-sdk/openai-compatible" to specify the SDK package
- "models": { "alias": { "name": "..." } } to register model IDs
- the model field formatted as "providerID/modelAlias"

Without these, OpenCode throws ProviderModelNotFoundError and falls back to its default (BigPickle), which is why agents reported using BigPickle instead of our configured Workers AI models. Built-in providers (scaleway, anthropic) already have pre-registered models and don't need these keys.

* remove: Neko browser sidecar feature

Remove the entire Neko remote browser sidecar feature to improve VM startup performance. This removes:
- VM agent browser package (manager, container, chrome config, socat, etc.)
- Browser HTTP handlers and route registrations
- Cloud-init Neko image pre-pull
- API browser proxy routes (project and workspace level)
- Sidecar alias subdomain routing (ws-{id}--browser)
- Shared types (BrowserSidecarStatus, StartBrowserSidecarRequest, etc.)
- SIDECAR_ALIASES/SidecarAlias/isSidecarAlias from the shared package
- BrowserSidecar component, useBrowserSidecar hook
- Browser API client functions
- Neko env vars (NEKO_IMAGE, NEKO_PRE_PULL, BROWSER_PROXY_TIMEOUT_MS)
- All related tests and backlog tasks

* refactor: replace Workers AI binding with AI Gateway pass-through

Switch the AI proxy from using env.AI.run() (the Workers AI binding) to forwarding requests transparently to Cloudflare AI Gateway. This enables full OpenAI-compatible features, including tools/function calling and native streaming, without format translation. The Gateway endpoint at gateway.ai.cloudflare.com provides an OpenAI-compatible interface that handles all format concerns natively. The proxy now only handles SAM-specific auth, rate limiting, and token budgets before forwarding.

* test: add temporary unauthenticated AI Gateway test endpoint

PROTOTYPE ONLY; must be removed before merging. Adds POST /ai/v1/test/chat/completions, which skips auth/rate-limiting to allow direct testing of the AI Gateway integration.

* fix: fall back to Workers AI REST API when gateway not configured

The AI Gateway needs to be created in the Cloudflare dashboard first. Until then, use the Workers AI OpenAI-compatible REST API directly as a fallback. Both endpoints accept the same OpenAI format.

* fix: remove unused DEFAULT_AI_GATEWAY_ID constant

* feat: enable AI Gateway with per-user metadata tracking

- Add a cf-aig-metadata header with userId, workspaceId, and modelId to every AI proxy request; this enables per-user token usage tracking in Gateway logs (sketched below)
- Automate AI Gateway creation in the deploy pipeline (configure-ai-gateway.sh)
- Set AI_GATEWAY_ID dynamically via sync-wrangler-config.ts
- Remove the temporary unauthenticated /test/chat/completions endpoint
- Capture cf-aig-log-id from Gateway responses for debug correlation
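A sketch of the forwarding step with metadata attached. The URL shape is Cloudflare's documented Workers AI route through AI Gateway; the env and helper names are assumptions:

```typescript
// Sketch of the proxy's forwarding step once auth, rate limiting, and
// token budgets have passed. Env/helper names are illustrative.
interface GatewayEnv {
  CF_ACCOUNT_ID: string;
  AI_GATEWAY_ID: string;
  CF_API_TOKEN: string;
}

async function forwardToGateway(
  body: unknown,
  meta: { userId: string; workspaceId: string; modelId: string },
  env: GatewayEnv,
): Promise<Response> {
  // Workers AI, OpenAI-compatible, via AI Gateway.
  const url =
    `https://gateway.ai.cloudflare.com/v1/${env.CF_ACCOUNT_ID}` +
    `/${env.AI_GATEWAY_ID}/workers-ai/v1/chat/completions`;

  const resp = await fetch(url, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${env.CF_API_TOKEN}`,
      // Attached to the Gateway log entry for this request; this is what
      // makes per-user token usage visible without touching the payload.
      "cf-aig-metadata": JSON.stringify(meta),
    },
    body: JSON.stringify(body),
  });

  // Correlate our logs with the Gateway's log entry.
  console.log("cf-aig-log-id:", resp.headers.get("cf-aig-log-id"));

  // Streamed through unchanged; no format translation needed.
  return resp;
}
```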
* fix: add required fields to AI Gateway creation request

The Cloudflare API requires collect_logs, cache_ttl, cache_invalidate_on_update, rate_limiting_interval, and rate_limiting_limit in the gateway creation body.

* feat: add SQLite-backed event store + resource monitor with download

Add persistent SQLite storage for VM agent events (replacing ephemeral in-memory slices) and 1-minute resource snapshots (CPU, memory, disk). Both databases are downloadable via the node page UI for post-hoc debugging of workspace startup times and resource contention.
- eventstore package: SQLite with WAL mode, 7-day retention trim
- resourcemon package: 1-minute interval snapshots from /proc + statfs
- VM agent export endpoints: GET /events/export, GET /metrics/export
- API proxy routes: GET /nodes/:id/events/export and /metrics/export
- UI: download buttons in the NodeEventsSection header

* fix: checkpoint WAL before serving SQLite database downloads

WAL mode keeps recent writes in a separate -wal file. Without a checkpoint, the main .db file is empty or stale when downloaded.

* fix: reorder cloud-init boot sequence so vm-agent starts last

The vm-agent was starting BEFORE the Node.js/devcontainer CLI install and BEFORE the Docker restart. This caused two problems:
1. waitForCommand("devcontainer") stalls until the CLI is installed (~minutes)
2. systemctl restart docker kills running containers, aborting any devcontainer build the agent started, wasting the entire first cycle

New order: Docker start → firewall → Node.js/CLI install → journald → Docker restart → metadata block → TLS → download vm-agent → start vm-agent. Also removes Requires=docker.service from the systemd unit, since Docker is already running and stable by the time the agent starts.

* fix: add boot timing instrumentation and base image pre-pull

Adds logger markers for each cloud-init phase (sam-boot tag) so we can see exactly where time is spent during node provisioning via journald. Also pre-pulls mcr.microsoft.com/devcontainers/base:ubuntu during cloud-init (after the Docker restart, before the vm-agent start). This caches the ~270MB base image so the first workspace build doesn't wait for it.

* fix: increase agent ready timeout to 15 min, parallelize image pre-pull

Root cause of the task failures: cloud-init now takes 8-12 minutes to complete (packages, Docker, firewall, Node.js, CLI, Docker restart, image pre-pull). The old 10-minute timeout expired before the agent could start. Changes:
- Increase DEFAULT_TASK_RUNNER_AGENT_READY_TIMEOUT_MS from 600s to 900s
- Increase DEFAULT_NODE_AGENT_READY_TIMEOUT_MS to match
- Run the base image pre-pull in the background, concurrent with the Node.js/CLI install (saves 3-5 minutes)
- Wait for the background pull before the Docker restart (the restart kills in-flight pulls)

* fix: increase stuck-queued timeout to 20 min to avoid race with agent-ready

The stuck-task cron (DEFAULT_TASK_STUCK_QUEUED_TIMEOUT_MS = 10 min) was killing tasks that were legitimately waiting for cloud-init to finish. Since cloud-init takes 8-12 min and the agent-ready timeout is 15 min, the cron was racing and winning, failing tasks before the agent could ever start. Increase to 20 min (a 5-minute buffer above the agent-ready timeout).
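The race is easiest to see as an ordering invariant between the three timeouts; the values are the ones from the commits above:

```typescript
// Each watchdog must outlast the stage it supervises, or it wins the race
// and fails work that was still making progress.
const CLOUD_INIT_WORST_CASE_MS = 12 * 60_000; // observed: 8-12 min

const DEFAULT_NODE_AGENT_READY_TIMEOUT_MS = 900_000; // 15 min
const DEFAULT_TASK_RUNNER_AGENT_READY_TIMEOUT_MS = 900_000; // kept in lockstep
const DEFAULT_TASK_STUCK_QUEUED_TIMEOUT_MS = 1_200_000; // 20 min: ready + 5 min buffer

// The old values inverted this: stuck-queued (10 min) < agent-ready (15 min),
// so the cron killed tasks before the agent could ever report ready.
console.assert(
  DEFAULT_TASK_STUCK_QUEUED_TIMEOUT_MS > DEFAULT_NODE_AGENT_READY_TIMEOUT_MS &&
    DEFAULT_NODE_AGENT_READY_TIMEOUT_MS > CLOUD_INIT_WORST_CASE_MS,
);
```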
* fix: move vm-agent start back to early boot (after Docker + firewall)

The vm-agent had been moved to the LAST step in cloud-init, meaning it couldn't heartbeat until ALL steps completed (Node.js install, CLI install, image pull, Docker restart; 8-12+ min total). This caused every task to time out. The agent only needs Docker running and the firewall configured to start. Node.js, the devcontainer CLI, image pulls, and the Docker restart are needed for workspace creation, not for the agent itself. The agent's bootstrap code already polls for `devcontainer` CLI availability.

New boot order:
- Phase 1 (critical path, ~2-3 min): Docker → firewall → TLS → vm-agent
- Phase 2 (after the agent is up): Node.js → CLI → image pull → Docker restart

This restores the fast-heartbeat pattern from before Mar 21 while keeping all the security features (firewall, metadata block, TLS hardening).

* feat: move system provisioning from cloud-init into vm-agent

Cloud-init now only does: Docker start → vm-agent download → vm-agent start. All other provisioning (firewall, Node.js, devcontainer CLI, image pulls, Docker restart, metadata block) is handled by the new provision package inside the vm-agent. This means the agent heartbeats within ~60s of boot instead of 8-12 min. Every provisioning step is logged to the SQLite eventstore, making it downloadable for debugging via /events/export. The provision package runs synchronously in main() after server.Start() (so /health is already responding) but before bootstrap.Run() (so the devcontainer CLI is installed before workspace creation begins).

* fix: strip cloud-init to bare minimum; only vm-agent download + start

Cloud-init now does ONLY:
1. Create the workspace user
2. Write config files to disk (TLS certs, firewall scripts, etc.)
3. Download the vm-agent binary
4. Start the vm-agent via systemd

Removed from cloud-init:
- The packages: section (docker.io, git, curl, etc.), which was blocking runcmd for 5-10 minutes while apt-get ran
- Docker start/enable, moved to vm-agent provision
- All firewall/TLS/metadata runcmd steps, already in the vm-agent

The vm-agent's provision package now also handles Docker installation (apt-get install docker.io), basic package installation (git, curl, jq, etc.), and everything else (firewall, Node.js, devcontainer CLI, etc.). This means the vm-agent starts and heartbeats within seconds of cloud-init beginning, instead of waiting 5-10 min for apt-get.

* fix: move systemd unit to write_files; heredoc never terminated in runcmd

The vm-agent systemd unit file was created via a bash heredoc (cat << 'UNIT') inside a cloud-init YAML block scalar (- |). The YAML indentation added 4 leading spaces to every line, including the closing UNIT delimiter. Bash's << operator requires the closing delimiter at column 0; with leading spaces it was never recognized as a terminator. Result: the heredoc consumed all remaining runcmd lines as content. The systemctl daemon-reload, enable, and start commands never executed. The vm-agent was never started on any VM.

Fix: move the unit file to write_files (which correctly strips YAML block indentation) and simplify runcmd to just systemctl commands. A regression test verifies the unit file is in write_files, has no leading whitespace on section headers, and that runcmd contains no heredocs (see the sketch below).
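A sketch of what that regression test could look like, assuming the user-data is rendered by a TypeScript helper (`renderCloudInit` is a stand-in name) and parsed with the `yaml` package:

```typescript
import { describe, expect, it } from "vitest";
import { parse } from "yaml";

// Stand-in for however the repo renders the cloud-init user-data string;
// the assertions are the point, not this helper.
declare function renderCloudInit(): string;

describe("cloud-init template", () => {
  it("keeps the systemd unit in write_files and runcmd free of heredocs", () => {
    const doc = parse(renderCloudInit());

    // The unit must be written via write_files, which strips YAML block
    // indentation correctly. (Path assumed for illustration.)
    const unit = doc.write_files.find(
      (f: { path: string; content: string }) =>
        f.path.endsWith("vm-agent.service"),
    );
    expect(unit).toBeDefined();

    // Section headers at column 0; indentation is what broke the heredoc.
    expect(unit.content).toMatch(/^\[Unit\]/m);
    expect(unit.content).toMatch(/^\[Service\]/m);

    // No heredocs in runcmd: a `<<` delimiter inside a YAML block scalar
    // ends up indented and can never terminate.
    for (const cmd of doc.runcmd) {
      expect(String(cmd)).not.toContain("<<");
    }
  });
});
```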
* fix: disable cloud-init apt-get update, add boot logging

Cloud-init runs apt-get update by default before runcmd, even without a packages section. This blocks the vm-agent start for 5-10 minutes. Added package_update: false and package_upgrade: false to skip the default apt operations; the vm-agent handles all package installs itself. Also added detailed logger calls in runcmd to trace download and service-start progress via syslog.

* debug: add temporary SSH key for VM inspection

TEMPORARY; will be removed after debugging.

* fix: quote runcmd logger lines; YAML colons parsed as mappings

Cloud-init's YAML parser treats `- logger -t sam-boot "PHASE START: x"` as a mapping ({key: value}) because of the colon-space inside the string. This crashes the entire runcmd module with: TypeError: Unable to shellify type 'dict'. Fix: single-quote all runcmd entries that contain colons.

* chore: remove debug SSH key from cloud-init template

* fix: update tests for 900s agent timeout, split nodes.ts, fix lint

- Update test expectations from 600_000 to 900_000 to match the DEFAULT_NODE_AGENT_READY_TIMEOUT_MS change (15 min for cloud-init)
- Split the node lifecycle callbacks (ready, heartbeat, errors) into node-lifecycle.ts to bring nodes.ts under the 500-line limit
- Fix an import-sort lint error in NodeEventsSection.tsx

* fix: send node-ready callback after provisioning, not at server start

The node-ready callback was firing inside startNodeHealthReporter() when the HTTP server starts, before system provisioning installs Docker and Node.js. This caused the control plane to dispatch workspace creation immediately, which failed with "docker: executable file not found". Move sendNodeReady() out of the health-reporter goroutine and call it explicitly from main.go after provision.Run() completes.

* fix: update tests to reference node-lifecycle.ts after route split

The branch moved the error-reporting and callback endpoints from nodes.ts to node-lifecycle.ts. Four test files still referenced the old path.

* chore: trigger CI re-run for preflight evidence check

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…container (raphaeltm#738)

The docker-in-docker:2 feature was causing lightweight container builds to fail because it runs apt-get install during the Docker build step, which depends on network connectivity to archive.ubuntu.com. When that connection times out (common on Hetzner VMs), the entire lightweight container fails to start.

Replace it with "privileged": true in the default devcontainer config. This gives the container the kernel access needed to install and run Docker on demand after boot, without any network dependency at build time. Users and agents can install Docker when they need it via:

curl -fsSL https://get.docker.com | sh && dockerd &

This keeps the lightweight container fast (~20s boot) while preserving the ability to run Docker workloads.

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
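For concreteness, a sketch of the resulting default config; the real file may carry more fields, and the image shown is the base image pre-pulled in raphaeltm#733:

```typescript
// Sketch of the default devcontainer config after this change.
const defaultDevcontainer = {
  image: "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Replaces the docker-in-docker:2 feature: privileged mode grants the
  // kernel access needed to install and run dockerd on demand after boot,
  // with no apt-get (and no archive.ubuntu.com dependency) at build time.
  privileged: true,
  // Previously:
  // features: { "ghcr.io/devcontainers/features/docker-in-docker:2": {} },
};
```

The trade-off is a broadly privileged container in exchange for removing a network dependency from the build's critical path; the network cost is paid later, and only if Docker is actually needed.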
* feat: add node debug package download (tar.gz of all logs, metrics, events)

New GET /debug-package endpoint on the VM agent that assembles a tar.gz archive containing all diagnostic data: cloud-init logs, journald logs, VM agent service logs, Docker container logs, a system info snapshot, the events DB, the metrics DB, boot events, dmesg, syslog, firewall rules, network config, disk usage, and a process list. Proxied through the API Worker at GET /api/nodes/:id/debug-package and exposed in the UI as a "Debug Package" button on the node events section.

* ci: retrigger checks after PR body update

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
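A sketch of the Worker-side proxy route, assuming a Hono-style router; `agentFetch` is a stand-in for however the Worker actually reaches a node's agent:

```typescript
import { Hono } from "hono";

// Stand-in: resolves the node's address and performs the authenticated
// request to the VM agent. Not the repo's actual helper.
declare function agentFetch(nodeId: string, path: string): Promise<Response>;

const app = new Hono();

app.get("/api/nodes/:id/debug-package", async (c) => {
  const nodeId = c.req.param("id");
  const upstream = await agentFetch(nodeId, "/debug-package");

  // Pass the tar.gz through as a stream rather than buffering it;
  // debug archives (journald, Docker logs, two SQLite DBs) can be large.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      "content-type": "application/gzip",
      "content-disposition": `attachment; filename="node-${nodeId}-debug.tar.gz"`,
    },
  });
});
```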
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: move task to active

* feat: add recent chats dropdown to nav bar

Adds a message-bubble icon to the mobile header (between search and notifications) and the desktop sidebar header that opens a dropdown showing recently active chat sessions across all projects. Enables quick 2-tap switching between active conversations on mobile (down from 3-4 taps).
- useRecentChats hook with visibility-aware polling (30s interval; sketched after this entry)
- Portal-based dropdown following the NotificationCenter pattern
- Shows a state dot, topic, project name, and relative time per session
- Active-count badge on the icon
- Empty/loading/error states
- Click-outside, Escape, and navigation-close behavior
- Playwright visual audit tests (mobile + desktop)

* fix: desktop dropdown positioning and Playwright test fixes

- Use left-aligned positioning for the desktop dropdown instead of right-aligned
- Fix the auth mock to intercept /api/auth/get-session (the BetterAuth endpoint)
- Scope all test assertions to the dialog locator to avoid conflicts with the Chats page
- Fix the error-state mock to fail the projects endpoint (not just sessions)
- Change test navigation to /chats (an authenticated route)

* fix: address UI/UX review findings for recent chats dropdown

- Increase the trigger button touch target from 36px to 44px (w-11 h-11)
- Add aria-expanded and aria-haspopup to the trigger button
- Change role="dialog" to role="menu" with role="menuitem" on items
- Add focus-visible styling to the Retry and View All buttons
- Add min-h-[44px] to the View All footer for touch-target compliance
- Add a viewport edge guard for desktop panel positioning
- Add a click-outside close test and a badge count assertion

* chore: move task to archive

* chore: trigger CI re-run for updated PR body

* chore: trigger CI with updated preflight evidence

Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
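A sketch of the visibility-aware polling pattern the hook uses; the endpoint and session shape are assumptions:

```typescript
import { useEffect, useState } from "react";

interface RecentChat {
  sessionId: string;
  topic: string;
  projectName: string;
  lastActiveAt: string;
  state: "active" | "idle";
}

export function useRecentChats(intervalMs = 30_000) {
  const [chats, setChats] = useState<RecentChat[]>([]);

  useEffect(() => {
    let timer: ReturnType<typeof setInterval> | undefined;

    const poll = async () => {
      const res = await fetch("/api/chats/recent"); // endpoint assumed
      if (res.ok) setChats(await res.json());
    };

    const start = () => {
      poll();
      timer = setInterval(poll, intervalMs);
    };
    const stop = () => {
      if (timer) clearInterval(timer);
    };

    // Visibility-aware: don't burn requests while the tab is hidden,
    // and refresh immediately when the user comes back.
    const onVisibility = () =>
      document.visibilityState === "visible" ? start() : stop();

    onVisibility();
    document.addEventListener("visibilitychange", onVisibility);
    return () => {
      stop();
      document.removeEventListener("visibilitychange", onVisibility);
    };
  }, [intervalMs]);

  return chats;
}
```

Error and loading states are omitted here; per the commits, the real hook surfaces both to the dropdown.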
Co-authored-by: Raphaël Titsworth-Morin <raphael@raphaeltm.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>