diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md new file mode 100644 index 0000000..7b89761 --- /dev/null +++ b/.claude/CLAUDE.md @@ -0,0 +1,106 @@ +# CLAUDE.md + +See @README.md for project overview. + +> **Product:** Ckipper — multi-account Claude Code manager with Docker isolation. +> **Repo:** Single-package zsh project. Top-level layout in §1. + +## Quick Reference + +**Core Rules:** +- @.claude/rules/code-style.md — Naming, complexity limits, documentation +- @.claude/rules/testing.md — Test structure, what to test, quality gates +- @.claude/rules/security.md — Security requirements +- @.claude/rules/file-organization.md — Directory structure, file caps, dependency direction + +**Package Rules:** *(add package-specific rule files as needed)* + + +## How to Use These Instructions + +1. **Always follow** the core philosophy and code standards +2. **Consult package-specific rules** when working in individual packages +3. **Package rules extend, not override** shared standards (e.g., a backend package adds validation requirements but doesn't remove the 25-line method limit) + +## 1. Project Overview + +Ckipper is a zsh-based wrapper for the [Claude Code CLI](https://claude.ai/cli) that provides: + +- **Multi-account isolation** — separate `~/.claude-/` config dirs per registered account, with macOS Keychain integration for credentials. +- **Worktree-aware launchers** — `w ` creates a git worktree, syncs settings, and either runs Claude Code locally or launches it inside a hardened Docker container. +- **Per-account aliases** — auto-generated `` shell functions (e.g., `personal`, `work`) that route to the correct config dir. +- **Safety hooks** — Claude Code hooks (`bash-guardrails.sh`, `protect-claude-config.sh`) that block destructive commands and protect Ckipper-owned config files from accidental modification. 
+- **Docker sandbox** — Dockerfile + entrypoint that isolate Claude Code with an egress firewall, credential injection via tmpfs, and pre-installed MCP server tooling. + +**Top-level layout:** + +``` +ckipper.zsh # ckipper CLI entry (sourced from .zshrc) +lib/core/ # shared primitives (registry, keychain, config, prompt, style) +lib/account/ # ckipper account subcommands (feature dir) +lib/worktree/ # ckipper worktree subcommands (feature dir) +lib/config/ # ckipper config get/set/unset/list/edit (feature dir) +lib/setup/ # ckipper setup wizard (orchestration: delegates to features) +lib/run/ # ckipper run shortcut for `worktree run` (orchestration) +lib/launcher/ # bare `ck` interactive menu (orchestration) +hooks/ # Claude Code safety hooks +docker/ # Dockerfile + entrypoint + firewall + cleanup +tests/ # bats + pytest tests +install.sh # one-shot installer (copies to ~/.ckipper/) +.claude/ # rules + project Claude config +``` + +`lib/` has two layers (per `.claude/rules/shell-conventions.md`): feature dirs own subcommand functionality and MUST NOT call into each other; orchestration dirs delegate to feature dirs' public entry points. + +## 2. Core Engineering Philosophy + +1. **KISS** — Keep It Simple, Stupid. The simplest solution that works is the best solution. +2. **Clarity over cleverness** — No tricks, no golf, no "elegant" one-liners that require a comment to explain. +3. **Functional decomposition** — Break problems into small, named, single-purpose functions. +4. **Object-Oriented Design** — Model the domain with clear objects, well-defined boundaries, and explicit contracts. +5. **Test what matters** — Unit tests are not optional. If logic makes a decision, it gets a test. ALWAYS WRITE TESTS. +6. **SOLID Principles** — Follow SOLID programming principles. + +## 3. Code Review Checklist + +Before approving any PR, verify: +- [ ] **Can I understand every method without reading its callees?** If no, the names need work. 
+- [ ] **Are all magic numbers extracted to named constants?** +- [ ] **Is every method <= 25 lines?** NO EXCEPTIONS. +- [ ] **Is nesting <= 2 levels deep?** Extract if not. +- [ ] **Does each module/class have a single, obvious responsibility?** +- [ ] **Are there tests for every decision point in the logic?** +- [ ] **Is there any cleverness that should be replaced with clarity?** +- [ ] **Would a new teammate understand this in 5 minutes?** +- [ ] **Do new entry points (CLI, scripts, API endpoints) validate input at trust boundaries?** +- [ ] **Do exported functions have clear documentation when behavior isn't obvious from name and signature?** +- [ ] **Do route handlers and service methods log their outcomes (success, not-found, error)?** +- [ ] **Do error paths surface to monitoring — never swallowed silently?** + +## 4. Infrastructure & Services + +CI runs `make lint` + `make test-unit` on `macos-latest` via `.github/workflows/ci.yml`. + +## 5. Git Workflow + +**Branch naming:** `feature/{ticket-or-slug}-{short-description}` (e.g., `feature/123-user-auth`) +**Commit messages:** Reference the ticket if applicable (e.g., `#123: Implement user auth flow`) +**Always use feature branches + PRs.** NEVER commit directly to `main` or `develop`. +**ALWAYS create PRs as drafts** (`gh pr create --draft`). The author decides when to mark "Ready for review." +**PR description:** Link to the ticket if applicable, describe what changed and why, list affected files. + +## 6. AI-Specific Instructions + +- **Read and ingest before you edit.** Always read relevant source files before proposing changes. NEVER speculate about code you haven't inspected. +- **These rules are authoritative over observed codebase patterns.** If existing code violates a rule in this document or `.claude/rules/`, that is technical debt — not a convention to follow. Never justify bad practices because you see them elsewhere in the repo. When in doubt, follow the rules, not the code.
+- **Follow existing design patterns that comply with these rules.** Study the relevant package and match the established architecture, file placement, and naming. If a convention exists and does not violate these rules, use it. If you have a clear technical reason to deviate, explain the rationale. +- **Reuse existing utility functions** +- **Reuse existing UI components** +- **Verify schema and queries against source files.** Check your ORM schema for table/column structure before writing code that references them. +- **Check existing types before creating new ones** to avoid duplication. Create new types when genuinely needed for new features. +- **Flag security concerns proactively** (exposed secrets, SQL injection, missing auth, etc.). +- **Use parallel tool calls** for independent operations (e.g., reading multiple files, running lint and test simultaneously). +- **Package context awareness:** When working in a specific package, prioritize that package's rule file. diff --git a/.claude/rules/code-style.md b/.claude/rules/code-style.md new file mode 100644 index 0000000..0ab6965 --- /dev/null +++ b/.claude/rules/code-style.md @@ -0,0 +1,56 @@ +# Code Style + +## Method Size & Complexity + +These are **hard limits**, not guidelines: +- **MAXIMUM 25 lines per method/function** (excluding blank lines and closing braces). +- **MAXIMUM 2 levels of control flow nesting** per method. If you need a third level, extract a method. +- **MAXIMUM 3 parameters** per method. Beyond that, introduce a parameter object or rethink the design. + +## YOU MUST USE EARLY RETURNS OVER DEEP NESTING + +Guard clauses go at the top. The happy path reads straight down. + +## Naming Conventions + +Names should be **descriptive and unambiguous**. A reader should never have to look at a method body to understand what it does. Avoid abbreviations. 
+ +- **Functions/Methods**: language-idiomatic (snake_case in shell/Python, camelCase in JS/TS) — verbs (`getUserById`, `approve_request`, `calculateTotal`) +- **Types/Classes**: PascalCase — nouns (`InvoiceCalculator`, `ClaimValidator`) +- **Booleans**: prefix with `is`, `has`, `can`, `should` (`isEligible`, `has_access`) +- **Collections**: pluralize (`users`, `active_orders`, `pendingItems`) +- **Constants/env vars**: UPPER_SNAKE_CASE (`ALGORITHM`, `KEY_LENGTH`, `MAX_RETRY_COUNT`) +- **Files**: match the language's idiomatic convention (PascalCase for classes, kebab-case for shell scripts, snake_case for Python modules) + +## General Rules + +- **Use the language's strongest type discipline** — explicit over implicit. No escape hatches (`any`, untyped `unknown`, `eval`) without justification. +- **Prefer async-first APIs** in languages that support them (`async/await` over raw promises/callbacks). +- **One responsibility per file.** +- **NO MAGIC NUMBERS EVER** — ALWAYS EXTRACT TO A NAMED CONSTANT. + +## Code Documentation & Comments + +All code must include clear, human-readable documentation. Comments should be written so that a junior-level developer or higher can understand what is being done and why. + +**Public APIs are documented.** In languages with doc tooling (TSDoc/JSDoc, Python docstrings, Rustdoc, etc.), document every exported function, method, class, and interface — IDEs surface these as tooltips and they enable automated API-doc generation. + +**Required information** (using the language's doc syntax): +- Each parameter's purpose and constraints +- Return value and the conditions producing it +- Exceptions/errors the function may raise +- Usage example for non-trivial functions +- Cross-references to related code or docs +- Deprecation notice with a migration path + +**Inline comments** should explain "why," not "what." Comment business logic, workarounds, edge cases, and non-obvious decisions — not obvious code. 
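A minimal shell sketch pulling several of these rules together — a named constant instead of a magic number, guard clauses with early returns, and "why" comments. The function and constant names are invented for illustration, not part of Ckipper:

```shell
# Maximum age, in days, before a cached registry file is considered stale.
readonly MAX_CACHE_AGE_DAYS=7

# Returns 0 if the given cache file exists and is fresh enough to reuse.
# Guard clauses first; the happy path reads straight down.
is_cache_fresh() {
    local cache_file="$1"

    [ -n "$cache_file" ] || return 1   # no path given
    [ -f "$cache_file" ] || return 1   # nothing cached yet

    # GNU stat uses -c %Y; BSD/macOS stat uses -f %m. Try GNU first.
    local mtime age_days
    mtime=$(stat -c %Y "$cache_file" 2>/dev/null || stat -f %m "$cache_file")
    age_days=$(( ($(date +%s) - mtime) / 86400 ))

    [ "$age_days" -le "$MAX_CACHE_AGE_DAYS" ]
}
```

Note the boolean-style name (`is_cache_fresh`) and the comments explaining the portability workaround rather than restating the code.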
+ +## Linting + +Linting is enforced via `make lint` (locally) and `.github/workflows/ci.yml` (CI). Required tools: + +- **shellcheck** — all `.zsh` and `.sh` files. Configured via `.shellcheckrc` at repo root: `enable=all`, `disable=SC1090` (dynamic source paths are intentional in this project). Any other disables require a comment justifying the exception. +- **shfmt** — `shfmt -d -i 4 -ci -s` (4-space indent, indent case, simplify). Diffs MUST be empty in CI. +- **ruff** — Python files. Configured in `pyproject.toml`. Rules: `E`, `F`, `B`, `D` (docstring checks). Line length 100. + +Run `make bootstrap` once to install all linters via Homebrew + pip. diff --git a/.claude/rules/file-organization.md b/.claude/rules/file-organization.md new file mode 100644 index 0000000..7db6b2a --- /dev/null +++ b/.claude/rules/file-organization.md @@ -0,0 +1,36 @@ +# File Organization + +## Directory Size Limits + +These are **hard limits**, not guidelines: +- **MAXIMUM 10 source files per directory.** Count only source files — colocated test and story files do NOT count toward the cap. If a directory has 10 source files, the next file MUST go in a subdirectory. No exceptions. +- **Colocate test and story files with their source files.** `parser.py` and `parser_test.py` (and any colocated stories/fixtures) belong together in the same directory. + +When a directory approaches the cap, group related files into subdirectories by **domain**, **feature**, or **concern** — not by file type. Colocated tests and stories move with their source files into the subdirectory. + +## Directory Grouping + +When a directory needs subdirectories, group by **domain or feature**, not by file type. Keep related code together — all claim files in one place, not scattered across `types/`, `validators/`, and `handlers/`. + +**Exception:** Top-level `src/` directories MAY be organized by architectural layer (e.g., `api/`, `db/`, `event/`) when they represent distinct system boundaries. 
Within those layers, group by domain. + +## Dependency Direction + +Imports flow **downward and inward**, never upward or sideways across features. + +- **Parent directories MUST NOT import from child route/feature directories.** Shared code lives at the nearest common ancestor. +- **Sibling feature directories MUST NOT import from each other.** If two features need the same code, extract it to their shared parent or a `_shared/` directory. +- **Shared modules are pulled up, never reached into.** If two features both need a utility, it belongs in their common parent — not in one reaching into the other. + +## DO NOT MIMIC EXISTING BAD PATTERNS + +- **NEVER add files to a directory that already exceeds the 10-file cap.** Flag it to the user and propose a restructuring. +- **When creating new files, follow these rules from scratch** — do not pattern-match against nearby directories that may be poorly organized. + +## Code Review Checklist + +Before approving any PR, verify: +- [ ] **Is every directory under the 10-file cap (source files only)?** +- [ ] **Are test and story files colocated with their source files?** +- [ ] **Are subdirectories grouped by domain/feature, not by file type?** +- [ ] **Do imports flow downward — no parent importing from child, no sibling cross-imports?** diff --git a/.claude/rules/security.md b/.claude/rules/security.md new file mode 100644 index 0000000..4bbf9b6 --- /dev/null +++ b/.claude/rules/security.md @@ -0,0 +1,25 @@ +# Security + +## No-Touch Zones + +These files require **explicit approval** before any modification: + + + + + + +- Any `.env*` files, deployment configs, or CI/CD workflows +- Database migration files +- Authentication/authorization configuration +- Cryptography or encryption modules +- Financial calculation modules +- Files handling user secrets or credentials + +## Security Rules + +- **NEVER hardcode secrets in source code** +- **NEVER run destructive operations without confirmation** (DROP, TRUNCATE, DELETE 
without WHERE, `rm -rf`, force-push) +- **Validate all input at trust boundaries** — NEVER TRUST UNTRUSTED INPUT (CLI args, env vars, files, network requests; use Joi/Zod/equivalent for HTTP) +- **Flag security concerns proactively** — exposed secrets, injection (SQL, shell, command), missing auth, XSS, CSRF, etc. +- **Use lossless types for sensitive data** — money in minor units (cents) as integers, never floats; times in epoch ms or ISO 8601 diff --git a/.claude/rules/shell-conventions.md b/.claude/rules/shell-conventions.md new file mode 100644 index 0000000..7bfce18 --- /dev/null +++ b/.claude/rules/shell-conventions.md @@ -0,0 +1,64 @@ +# Shell Conventions + +zsh-specific clarifications of `code-style.md`, `file-organization.md`, and `testing.md`. This file only covers what those rules don't already specify; nothing is repeated. + +## Function-line counting + +The "25 lines per function" cap counts function-body lines, **excluding** blank lines and lines containing only a closing `}`. Comment lines count. + +## Doc-header convention + +zsh has no native docstrings. Document every public function with a comment block immediately above its definition with these labelled sections (in this order, each as needed): + +``` +# +# +# Args: $1 — …, $2 — … +# Returns: 0 on …; non-zero on … +# Errors (stderr): "" — +``` + +Omit `Args:` if the function takes none. Omit `Errors:` if it never writes to stderr. 
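As a concrete illustration, here is a hypothetical `_core_*` helper documented with this header (the function body, account names, and error message are invented for the example — only the comment layout follows the convention):

```shell
# Look up an account's Claude config directory.
# Falls back to the default account when no name is given.
# Args: $1 — account name (optional; defaults to "default")
# Returns: 0 and prints the directory on stdout; 1 if the account is unknown
# Errors (stderr): "unknown account: NAME" — when $1 is not registered
_core_registry_dir() {
    local account="${1:-default}"

    case "$account" in
        default|work)
            printf '%s\n' "$HOME/.claude-$account"
            ;;
        *)
            printf 'unknown account: %s\n' "$account" >&2
            return 1
            ;;
    esac
}
```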
+ +## Function-name prefixes + +Used to encode the dependency direction at a glance and let CI verify it: + +- `_core_*` — `lib/core/` (shared primitives) +- `_ckipper_account_*` — `lib/account/` (account subcommands) +- `_ckipper_worktree_*` — `lib/worktree/` (worktree subcommands) +- `_ckipper_config_*` — `lib/config/` (config get/set/unset/list/edit) +- `_ckipper_setup_*` — `lib/setup/` (first-run wizard) +- `_ckipper_run_*` — `lib/run/` (top-level `ckipper run` shortcut) +- `_ckipper_launcher_*` — `lib/launcher/` (bare-`ck` interactive menu) +- `_ckipper_*` — top-level dispatcher in `ckipper.zsh` (and `_ckipper_doctor`, kept un-namespaced because it's exposed as a top-level command, even though its source lives in `lib/account/`) +- No prefix — public, callable from `.zshrc`: `ckipper`, `ck` + +## Booleans + +zsh has no native bool. Use string values `"true"`/`"false"` and test with `[[ "$x" = "true" ]]`. (Don't use `0`/`1` integers with `(( x ))`.) + +## Module sourcing + +Modules under `lib/` are sourced once by `ckipper.zsh` (the single entry script sourced from `~/.zshrc`). Modules MUST NOT source siblings. + +The `lib/` tree has two layers: + +1. **Feature dirs** — `lib/account/`, `lib/worktree/`, `lib/config/`. Each owns a coherent slice of subcommand functionality. Feature dirs MUST NOT call into each other (account cannot call worktree, worktree cannot call config, etc.). Shared code goes in `lib/core/` per `file-organization.md`. + +2. **Orchestration dirs** — `lib/launcher/`, `lib/setup/`, `lib/run/`. Their entire purpose is to delegate to feature dirs (the bare-`ck` menu, the first-run wizard, the `ckipper run` top-level shortcut). Orchestration dirs MAY call public, namespaced entry points from feature dirs (e.g. `_ckipper_worktree_dispatch`, `_ckipper_account_add`, `_ckipper_worktree_run`). They MUST NOT reach into another orchestration dir's internals. + +`lib/core/` is callable from any layer. 
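The legal call pattern can be sketched with stub functions (all function bodies and names here are hypothetical — only the prefixes follow the real convention): an orchestration function delegates to a feature dir's public entry point, and both layers may use `lib/core/`:

```shell
# lib/core/style.zsh — shared primitive, callable from any layer.
_core_style_header() { printf '== %s ==\n' "$1"; }

# lib/account/list.zsh — feature dir: may call core, but never a sibling
# feature dir such as lib/worktree/ or lib/config/.
_ckipper_account_list() {
    _core_style_header "Accounts"
    printf '%s\n' personal work
}

# lib/launcher/menu.zsh — orchestration dir: delegates to feature entry
# points; it must not reach into another orchestration dir's internals.
_ckipper_launcher_menu() {
    _ckipper_account_list
}
```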
+ +CI enforces the namespace separation via `make lint-merge-guards`. The grep-based guards catch any *reference* (definition or call) — feature siblings cannot reach into each other, and orchestration-only namespaces (`_ckipper_setup_*`, `_ckipper_run_*`, `_ckipper_launcher_*`) are pinned to their dirs: + +- `grep -rE '\b_w_[a-z]' lib/` — empty (no leftover renames from the merge) +- `grep -rE '\bW_[A-Z]' lib/` — empty (no leftover globals from the merge) +- `grep -rE '\b_ckipper_account_' lib/worktree/ lib/config/` — empty (sibling features can't call account) +- `grep -rE '\b_ckipper_worktree_' lib/account/ lib/config/` — empty (sibling features can't call worktree) +- `grep -rE '\b_ckipper_config_' lib/account/ lib/worktree/ lib/setup/ lib/run/ lib/core/` — empty (config namespace is pinned to lib/config/) +- `grep -rE '\b_ckipper_setup_' lib/account/ lib/worktree/ lib/config/ lib/run/ lib/core/` — empty (setup namespace is pinned to lib/setup/) +- `grep -rE '\b_ckipper_run_' lib/account/ lib/worktree/ lib/setup/ lib/config/ lib/core/` — empty (run namespace is pinned to lib/run/) +- `grep -rE '\b_ckipper_launcher_' lib/account/ lib/worktree/ lib/setup/ lib/config/ lib/run/ lib/core/` — empty (launcher namespace is pinned to lib/launcher/) + +Orchestration dirs (`lib/launcher/`, `lib/setup/`, `lib/run/`) are *omitted* from the account/worktree/config guards by design — that's the dispatcher exception. Adding them would block the only legal pattern of cross-imports. diff --git a/.claude/rules/testing.md b/.claude/rules/testing.md new file mode 100644 index 0000000..683f061 --- /dev/null +++ b/.claude/rules/testing.md @@ -0,0 +1,56 @@ +# Testing + +## Philosophy + +Tests are **documentation** that happens to be executable. A test should read like a specification of behavior. 
+ +## Structure: Arrange -> Act -> Assert + +Separate the three phases with blank lines: + +```python +class TestOrderProcessor: + def test_creates_order_for_valid_request(self): + request = RequestFixture.valid() + processor = build_order_processor() + + order = processor.submit(request) + + assert order.status == OrderStatus.PENDING + assert order.request_id == request.id + + def test_rejects_invalid_requests(self): + request = RequestFixture.invalid() + processor = build_order_processor() + + with pytest.raises(InvalidRequestError): + processor.submit(request) +``` + +## What to Test + +- **Processors and business logic** — Always. This is the core of the system. +- **Utility/helper functions** — Always. They're pure and easy to test. +- **Financial calculations** — Always. Money math must be bulletproof. +- **Entry points / handlers** — Integration tests for the happy path and key error cases. +- **Observable behavior, not implementation** — When testing components or modules with internal state, assert on the externally visible outcomes, not on private structure. + +## What NOT to Test + +- Simple getters/setters or data classes with no logic. +- Framework boilerplate (middleware wiring, route config, loader setup). +- Third-party library behavior. + +## Test Doubles + +Prefer **hand-written fakes** over mocking libraries. Fakes are simpler, more readable, and catch interface drift at compile time. + +## Quality Gates + +**Before committing, ALWAYS run** the project's verification commands (build / test / lint / typecheck — whichever apply). + + diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 0000000..da09e3c --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,32 @@ +name: CI + +on: + pull_request: + push: + branches: [develop, main] + +# Default to read-only permissions for the GITHUB_TOKEN. Individual jobs +# that need write access (e.g. release publishing) must opt in explicitly. 
+permissions: + contents: read + +jobs: + lint-and-test: + runs-on: macos-latest + timeout-minutes: 15 + steps: + - uses: actions/checkout@v4 + + - name: Install dev tools + run: brew install bats-core shellcheck shfmt gum + + - name: Install Python tools + run: | + python -m pip install --upgrade pip + pip install ruff pytest + + - name: Lint + run: make lint + + - name: Test (unit) + run: make test-unit diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..6fb6300 --- /dev/null +++ b/.gitignore @@ -0,0 +1,28 @@ +# Implementation plan artifacts and design docs (per repo policy, commit 5cef1de) +docs/plans/ + +# Editor / IDE local state +.vscode/ +.idea/ +*.swp +*.swo +*.bak + +# OS cruft +.DS_Store + +# Python tooling caches +.pytest_cache/ +.ruff_cache/ +__pycache__/ +*.pyc + +# Claude Code per-machine settings (also covered by user's global gitignore, but defensive) +.claude/settings.local.json +.claude/.session_*.json + +# install.sh creates timestamped .zshrc backups +.zshrc.ckipper-bak.* + +# Node modules (in case anyone runs install steps locally) +node_modules/ diff --git a/.shellcheckrc b/.shellcheckrc new file mode 100644 index 0000000..9add4d1 --- /dev/null +++ b/.shellcheckrc @@ -0,0 +1,38 @@ +# Project-wide shellcheck config. +# Strict mode: enable all checks, then explicitly disable rules with reasons. +enable=all + +# SC1090: "Can't follow non-constant source" — intentional for dynamic module loading +# (e.g., `source "$CKIPPER_DIR/lib/core/registry.zsh"`). +disable=SC1090 + +# SC1091: "Not following: file path" — same reason as SC1090 for sourced modules +# that aren't on disk during lint runs. +disable=SC1091 + +# Baseline disables — TODO(phase-4): resolve and remove each entry as the +# corresponding code is refactored. These rules currently fire in unrefactored +# bash scripts (.sh files only — zsh files use `zsh -n` for syntax checks). + +# TODO(phase-4): SC2001 — sed could be replaced with parameter expansion. 
+disable=SC2001 + +# TODO(phase-4): SC2016 — single quotes around $VAR don't expand (often intentional in jq filters). +disable=SC2016 + +# TODO(phase-4): SC2086 — unquoted variable expansions; review case-by-case. +disable=SC2086 + +# TODO(phase-4): SC2129 — multiple consecutive redirects; consider grouping. +disable=SC2129 + +# TODO(phase-4): SC2154 — variable referenced but not assigned (often set elsewhere). +disable=SC2154 + +# Style-only rules (`enable=all` includes these). Defer to project preference; keep disabled. +# SC2250: prefer braces around variable references — adds noise, low value. +# SC2292: prefer [[ ]] over [ ] — fine but pre-existing convention. +# SC2312: consider invoking command separately to take exit code — case-by-case. +disable=SC2250 +disable=SC2292 +disable=SC2312 diff --git a/AGENTS.md b/AGENTS.md new file mode 120000 index 0000000..ac55cbd --- /dev/null +++ b/AGENTS.md @@ -0,0 +1 @@ +.claude/CLAUDE.md \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md new file mode 100644 index 0000000..b55debc --- /dev/null +++ b/CHANGELOG.md @@ -0,0 +1,83 @@ +# Changelog + +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [Unreleased] — CLI + onboarding overhaul + +### Sync system overhaul + +- **New:** `ckipper account sync` is fully interactive by default. Run with no args to pick source, targets, and types via gum pickers; pass positional args to skip the relevant pickers. +- **New:** 10 syncable types covering every shareable Claude Code config category: MCP servers, settings (top-level + nested keys), CLAUDE.md, agents, commands, output-styles, skills, statusline (with internal/external script detection), user-written hooks (filtered against the install allowlist), and account preferences. 
+- **New:** Named bundles `all`, `customizations`, `claude-config`, `preferences` for `--include` / `--exclude`. +- **New:** Multi-destination support — sync from one account to many in a single invocation. +- **New:** Summary table preview with `git status`-style status badges (`[+]` / `[~]`) and on-demand drill-down for any per-item diff. +- **New:** Timestamped backups before any destructive write — `/.ckipper-sync-backups/-from-/` — with manifest-driven restore via `ckipper account sync undo [--pick | --list]`. +- **New:** Hard refusal when any Claude CLI process is running (override with `--force`). macOS does not expose other processes' `CLAUDE_CONFIG_DIR` env var to non-privileged callers, so the refusal cannot be filtered down to "Claude on this destination dir specifically" — it is conservative by design. +- **New:** Setup wizard offers initial sync after adding a 2nd-or-later account. +- **Renamed:** `ckipper account sync-hooks` → `ckipper account redeploy-hooks`. The new name reflects that it deploys the ckipper-managed safety hooks from the install dir to every account; it is NOT peer-to-peer sync. +- **Removed:** Old flag surface (`--mcp [names]`, `--settings `, `--all`). The new `--include` / `--exclude` model with bundles supersedes these. + +### Added +- `ckipper setup` — interactive wizard for configuring Ckipper. Re-runnable. +- `ckipper config get / set / unset / list / edit` — view and modify settings. +- `ckipper run ` — top-level shortcut for `ckipper worktree run`. +- Bare `ck` (no args) — interactive launcher menu. +- `ckipper doctor --fix` — gum-driven repairs, including the former `repair-plugins` flow. +- Per-account preferences: `always_docker`, `always_firewall`, `ssh_forward`, populating flag defaults for `ck run`/`ck wt run`. +- New global config keys: `CKIPPER_DEFAULT_BRANCH`, `CKIPPER_DEP_INSTALL_CMD`, `CKIPPER_NOTIFY_BELL`, `CKIPPER_ALIASES_AUTO_SOURCE`. 
+- New flags: `--no-docker`, `--no-firewall`, `--ssh-forward`, `--no-ssh-forward` (override per-account preferences inline). +- Auto-detection of `origin/HEAD` for the worktree base branch. +- Restyled output across `ck account list`, `ck worktree list`, `ck doctor`, and all `--help` text via shared `lib/core/style.zsh`. +- `ck doctor` validates `accounts.json` v2 preferences shape and `ckipper-config.zsh` keys against the schema. + +### Changed +- `accounts.json` schema bumped v1 → v2; auto-migrates on first command after upgrade. Backup written to `accounts.json.v1.bak.`. +- `install.sh` now ends by auto-invoking `ckipper setup` in interactive shells. +- `gum` is a hard prereq (added to `install.sh` prereq check and `make bootstrap`). + +### Removed +- `ckipper account repair-plugins` (folded into `ckipper doctor --fix`). + +## [0.2.0] — 2026-04-30 — Breaking changes: merge `w` into `ckipper` + +### Removed +- `w()` shell function (replaced by `ckipper worktree run`). +- `ckipper migrate` subcommand (legacy claude-docker-sandbox migration). +- `~/.ckipper/docker/w-function.zsh` (replaced by `~/.ckipper/docker/ckipper.zsh`; `install.sh` deletes the stale file and rewrites `~/.zshrc`). +- `~/.ckipper/docker/w-config.zsh` (replaced by `~/.ckipper/docker/ckipper-config.zsh`; `install.sh` migrates content automatically). +- `~/.zsh/completions/_w` (replaced by `~/.zsh/completions/_ckipper`). + +### Renamed +- `lib/w/` → `lib/worktree/`. +- `lib/ckipper/` → `lib/account/`. +- `_w_*` functions → `_ckipper_worktree_*`. +- `_ckipper_*` (account ops) → `_ckipper_account_*` (`_ckipper_doctor*` and the top-level dispatcher helpers stay un-namespaced). +- `W_PROJECTS_DIR` / `W_WORKTREES_DIR` / `W_PORTS` / `W_EXTRA_VOLUMES` / `W_EXTRA_ENV` / `W_REPO_DIR` / `W_COMPLETION_VERSION` → `CKIPPER_*` (same suffix; user-configurable via `ckipper-config.zsh`). 
+- Worktree runtime globals (`W_FLAG_*`, `W_PROJECT`, `W_BRANCH`, `W_CLI_ACCOUNT`, `W_COMMAND`, `W_ACTIVE_*`, `W_WT_PATH`, `W_RESOLVED_PORTS`, `W_DOCKER_ARGS`, `W_FIND_MAX_DEPTH`) → `CKIPPER_WT_*`. + +### Added +- Namespaced commands: `ckipper account ...`, `ckipper worktree ...`. +- Short namespace forms: `ckipper acct ...`, `ckipper wt ...` (and the existing `ck` as a shorthand for `ckipper`). +- Universal fuzzy-suggest on unknown subcommands (Levenshtein distance ≤ 2). +- Per-subcommand `--help` / `-h` for every command (e.g. `ckipper account add --help`). +- `lib/core/fuzzy.zsh` — `_core_fuzzy_suggest` helper. +- `make lint-merge-guards` — four CI grep guards that catch leftover `_w_*`/`W_*` references and cross-namespace imports. + +### Migration +Run `./install.sh`. It rewrites `~/.zshrc`, deletes stale paths (`w-function.zsh`, `_w` completion file), and preserves your existing `w-config.zsh` content as `ckipper-config.zsh` (variable names already match — they were renamed in this release). + +## [0.1.0] — 2026-04-28 + +### Added +- Initial public release. +- Multi-account Claude Code management (`ckipper add/remove/rename/list/default/sync/doctor/migrate`; renamed to `ckipper account *` in 0.2.0). +- `w()` worktree-aware launcher with normal and Docker (`--docker --firewall`) modes. +- Auto-generated per-account aliases. +- Safety hooks: `bash-guardrails.sh`, `protect-claude-config.sh`, `docker-context.sh`, `notify-bell.sh`. +- Docker sandbox with egress firewall. +- Modular architecture: `lib/core/` (shared) + `lib/ckipper/` + `lib/w/`. +- Test suite: bats-core (shell) + pytest (Python). +- CI: GitHub Actions on macos-latest. diff --git a/CLAUDE.md b/CLAUDE.md deleted file mode 100644 index 467d271..0000000 --- a/CLAUDE.md +++ /dev/null @@ -1,54 +0,0 @@ -# CLAUDE.md - -Docker-based sandbox for running Claude Code with `--dangerously-skip-permissions` safely. 
The `w()` shell function creates a git worktree, launches a Docker container, and runs Claude autonomously inside it. One command to spin up an isolated session on any project. - -## Architecture - -- **`w-function.zsh`** — zsh function that manages worktrees (`git worktree add`), builds/runs Docker containers, extracts macOS Keychain credentials, forwards ports, and detects `.git/config` tampering post-session. Includes tab completion. -- **`w-config.zsh.example`** — Template for user-specific Docker config (ports, volume mounts, env vars). Copied to `~/.claude/docker/w-config.zsh` on first install, never overwritten on updates. -- **`docker/Dockerfile`** — `node:24-slim` image with dev tools (git, gh, ripgrep, tmux, Chromium, uv/uvx, bun, Claude Code native installer). Runs as non-root `claude` user. -- **`docker/entrypoint.sh`** — Container startup: copies `.claude.json` from read-only staging mount, writes credentials to disk, sets git identity, disables GPG signing via `GIT_CONFIG_COUNT`, authenticates `gh` CLI, optionally enables firewall, creates `bunx` wrapper for statusline colors, runs `npm install` for Linux binaries, clears credential env vars, then runs the provided command (or drops to bash shell if none). 
-- **`hooks/`** — Four Claude Code hooks (registered in `settings-hooks.json`): - - `protect-claude-config.sh` — PreToolUse on Edit/Write: blocks modifications to `.claude/settings.json`, hooks, plugins - - `bash-guardrails.sh` — PreToolUse on Bash: blocks `rm -rf`, `git push --force`, `git reset --hard`, `.git/hooks` writes, recursive `chmod`/`chown`, credential file reads, Claude config modification via shell - - `docker-context.sh` — SessionStart: injects safety rules so Claude avoids triggering guardrails - - `notify-bell.sh` — Notification: sends terminal bell (`\a`) so host terminal fires native notifications (dock bounce, sound) - - All four are no-op on the host (exit early if `/.dockerenv` doesn't exist) -- **`docker/init-firewall.sh`** — Optional `iptables-legacy` egress whitelist (default-deny). Uses `--cap-add=NET_ADMIN`. - -## Critical Safety Rules - -**Never modify the host's `.git/config` from inside the container.** The container mounts the host's `.git` directory read-write (required for worktree refs to resolve). Any `git config --local` command modifies the host's actual git config. Use `GIT_CONFIG_COUNT` environment variables instead — they take highest priority in git's config precedence and disappear when the container exits. - -**Clear credentials from the environment before `exec claude`.** The entrypoint receives `CLAUDE_CREDENTIALS` and `GH_TOKEN` as env vars, writes them to disk, then `unset`s them before `exec`. The `exec` replaces the process, so `/proc/self/environ` is clean. If you add new credential env vars, follow this same pattern. - -**`.claude.json` is mounted read-only as `.claude-host.json`.** The entrypoint copies it to a writable location. This prevents the container from racing with the host's Claude process on the same file. If you need data from `.claude.json`, read from the copy, not the mount. 
- -**Hooks prevent accidents, not adversarial bypass.** Absolute paths (`/bin/rm`), language-level file access (`python3 -c "open(...).read()"`), and symlink indirection can bypass the bash guardrails. This is accepted. Don't over-engineer the regex matching. - -## Development Workflow - -| Change | Action Required | -|--------|----------------| -| `Dockerfile` | `w --rebuild-image` | -| `entrypoint.sh` | `w --rebuild-image` (it's `COPY`'d into the image) | -| `init-firewall.sh` | `w --rebuild-image` (it's `COPY`'d into the image) | -| `w-function.zsh` | `./install.sh` (copies to `~/.claude/docker/`; user config in `w-config.zsh` is preserved) | -| `w-config.zsh.example` | Template only — user's `~/.claude/docker/w-config.zsh` is never overwritten | -| `hooks/*` | Sync to `~/.claude/hooks/` | -| `settings-hooks.json` | `./install.sh` (auto-merged into `~/.claude/settings.json`) | - -Two copies of the code exist: this repo (development) and deployed files on the host (`~/.claude/docker/`, `~/.claude/hooks/`). Run `./install.sh` to sync all core files. User customizations live in `~/.claude/docker/w-config.zsh` and are never overwritten. - -## Testing - -`test-prompt.md` is the validation suite. It has 11 sections covering entrypoint verification, filesystem access, git operations, build tools, safety hooks (including bypass attempts), and container isolation. Run it by starting a Docker session (`w --docker claude`) and pasting the prompt contents. - -## Key Implementation Details - -- **npm install in entrypoint**: Replaces macOS native binaries (rollup, biome, esbuild, swc) with Linux ones. This is intentional — the worktree's `node_modules` were installed on macOS. -- **`TURBO_CACHE_DIR`**: Set to `/workspace/.turbo/cache` because worktree git roots resolve to the host's main repo path, which isn't writable in the container. 
-- **`gh auth`**: Must `unset GH_TOKEN` before `gh auth login --with-token` because gh refuses to store credentials while the env var is set. Then `gh auth setup-git` configures gh as the git credential helper for HTTPS push. -- **`core.hooksPath`**: Set globally to `~/.git-hooks` on the host so git ignores `.git/hooks/` — prevents planted hooks from executing on the host after the container exits. -- **Port forwarding**: Ports that are already in use on the host are silently skipped. -- **Worktree removal** (`w --rm`): Also cleans up the project entry from `~/.claude.json`. diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..f4d9d33 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,83 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported privately to the community leaders responsible for enforcement via a [GitHub Security Advisory](https://github.com/mswdev/Ckipper/security/advisories/new). All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of actions. + +**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. + +### 4. 
Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. + +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq +[translations]: https://www.contributor-covenant.org/translations diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000..26645ed --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,69 @@ +# Contributing to Ckipper + +Thanks for considering a contribution! + +## Quick start + +```sh +make bootstrap # installs bats-core, shellcheck, shfmt, ruff, pytest +make test # run all tests +make lint # run all linters +``` + +## Workflow + +1. Branch off `develop`. Branch name: `feature/{ticket-or-slug}-{short-description}`. +2. Make changes. Tests required for any decision-heavy logic — see [`.claude/rules/testing.md`](.claude/rules/testing.md). +3. `make test && make lint` must pass before you request review. +4. Open a **draft** PR targeting `develop`. The author marks it "Ready for review" when they're happy. 
+ +## Code style + +We follow the rules in [`.claude/rules/`](.claude/rules/) — please read them. Highlights: + +- **25-line function cap** (excluding blank lines and `}`-only lines). HARD LIMIT. +- **2-level nesting cap.** Use early returns / guard clauses. +- **3-parameter cap.** Beyond that, introduce a context object or rethink the design. +- **No magic numbers.** Extract to `readonly UPPER_SNAKE_CASE` constants. +- **No abbreviations.** `idx` → `index`, `ans` → `user_choice`, `tmp` → `*_tmpfile`. +- **Doc-headers on every public function.** See [`.claude/rules/shell-conventions.md`](.claude/rules/shell-conventions.md). + +## File organization + +- `lib/core/` — shared primitives (registry, keychain, utils, fuzzy). Used by both account and worktree namespaces. +- `lib/account/` — `ckipper account` subcommands. Function prefix: `_ckipper_account_*`. +- `lib/worktree/` — `ckipper worktree` subcommands. Function prefix: `_ckipper_worktree_*`. **Must NOT call `_ckipper_account_*` functions** (sibling cross-import). CI enforces this via `make lint-merge-guards`. +- Tests are colocated with source: `foo.zsh` + `foo_test.bats`. + +## Adding a new config key + +1. Add the key to all four arrays in `lib/core/schema.zsh` — `_CKIPPER_SCHEMA_TYPE`, `_DEFAULT`, `_SCOPE`, `_DESCRIPTION`. +2. The key is now usable via `ck config get/set/unset/list` and appears in the wizard automatically. +3. If the key affects worktree-creation behavior, update `lib/worktree/worktree.zsh` to read it via `_core_config_get`. +4. Add a test in `lib/core/schema_test.bats` to cover the new declaration. 
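Put together, a function meeting the caps above might look like this. The function and constant are hypothetical, written only to illustrate the doc-header, guard-clause, and named-constant rules:

```sh
# _example_ensure_directory <path>
# Creates <path> (mode 700) if it does not exist. Returns 1 on a missing argument.
# Hypothetical example for this guide, not a real Ckipper function.
readonly EXAMPLE_DIRECTORY_MODE=700   # no magic numbers

_example_ensure_directory() {
    local target_directory="$1"       # full word, not `dir` or `tgt`

    # Guard clauses keep nesting within the 2-level cap.
    [ -n "$target_directory" ] || { echo "ensure_directory: missing path" >&2; return 1; }
    [ -d "$target_directory" ] && return 0

    mkdir -p "$target_directory" && chmod "$EXAMPLE_DIRECTORY_MODE" "$target_directory"
}
```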
+ +## Module structure + +- `lib/core/` — shared primitives (`style.zsh`, `help.zsh`, `prompt.zsh`, `config.zsh`, registry, keychain, utils, fuzzy) +- `lib/account/` — account namespace (`_ckipper_account_*`) +- `lib/worktree/` — worktree namespace (`_ckipper_worktree_*`) +- `lib/config/` — config namespace (`_ckipper_config_*`) +- `lib/setup/` — wizard (`_ckipper_setup_*`) +- `lib/run/` — top-level `run` shortcut (`_ckipper_run_*`) +- `lib/launcher/` — bare `ck` interactive menu (`_ckipper_launcher_*`) + +CI guards in `make lint-merge-guards` enforce that each prefix only appears in its owning directory. + +## Test-mode prompt fallback + +`lib/core/prompt.zsh` honors `CKIPPER_NO_GUM=1` to fall back to pure-zsh `read` / numeric-pick. Tests set this env var so they don't depend on the gum binary or a TTY. + +## Testing + +- **Shell:** bats-core. Hand-written stubs under `tests/lib/stubs/`. No mocking libraries. +- **Python:** pytest. +- **Test what matters:** business logic, decision points, file I/O contracts. Skip trivial wrappers. + +## Reporting bugs / feature requests + +Open an issue on GitHub. For security vulnerabilities, see [`SECURITY.md`](SECURITY.md) — please do NOT open a public issue. diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..5ab5305 --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2026 mswdev + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..06273b5 --- /dev/null +++ b/Makefile @@ -0,0 +1,62 @@ +.PHONY: bootstrap test test-unit test-integration lint lint-shell lint-zsh lint-py lint-fmt lint-merge-guards install help + +help: + @echo "make bootstrap - install dev tools (bats, shellcheck, shfmt, ruff, pytest)" + @echo "make test - run all tests (bats + pytest)" + @echo "make test-unit - bats + pytest, fast suites only" + @echo "make test-integration - integration tests (BATS_INTEGRATION=1)" + @echo "make lint - run all linters" + @echo "make install - run ./install.sh" + +bootstrap: + brew install bats-core shellcheck shfmt gum + pip install ruff pytest + +test: test-unit + +test-unit: + bats --recursive . + pytest + +test-integration: + BATS_INTEGRATION=1 bats --recursive . + +lint: lint-shell lint-zsh lint-fmt lint-py lint-merge-guards + +# shellcheck only on .sh files (bash). .zsh files use `zsh -n` for syntax check. +lint-shell: + shellcheck install.sh + shellcheck hooks/*.sh + shellcheck docker/entrypoint.sh docker/init-firewall.sh + +lint-zsh: + zsh -n ckipper.zsh + @if [ -d lib ]; then \ + find lib -name '*.zsh' -not -name '*_test.bats' | while read -r f; do zsh -n "$$f" || exit 1; done; \ + fi + +lint-fmt: + shfmt -d -i 4 -ci -s install.sh hooks/*.sh docker/entrypoint.sh docker/init-firewall.sh + +lint-py: + ruff check docker/cleanup-projects.py + @if [ -d lib ]; then find . 
-name '*.py' -not -name '*_test.py' -not -path './tests/*' -exec ruff check {} +; fi + +# Catch leftover references from the w → ckipper merge. +# Each guard MUST return zero matches; if any fires, fix the source rather than the guard. +# `doctor.zsh` is exempted from the W_* check: it intentionally references +# pre-merge variable names to detect stale user config left behind by the +# rename. Any other leftover W_* assignment is a bug. +lint-merge-guards: + @! grep -rE '\b_w_[a-z]' lib/ ckipper.zsh 2>/dev/null || (echo "lint-merge-guards: leftover _w_* function references in lib/ or ckipper.zsh" >&2 && exit 1) + @! grep -rE --exclude=doctor.zsh '\bW_[A-Z]' lib/ ckipper.zsh templates/ 2>/dev/null || (echo "lint-merge-guards: leftover W_* globals in lib/, ckipper.zsh, or templates/" >&2 && exit 1) + @! grep -rE '\b_ckipper_account_' lib/worktree/ lib/config/ 2>/dev/null || (echo "lint-merge-guards: feature dir contains account-namespace references (sibling features must not import; see shell-conventions.md)" >&2 && exit 1) + @! grep -rE '\b_ckipper_worktree_' lib/account/ lib/config/ 2>/dev/null || (echo "lint-merge-guards: feature dir contains worktree-namespace references (sibling features must not import; see shell-conventions.md)" >&2 && exit 1) + @! grep -rE '\b_ckipper_config_' lib/account/ lib/worktree/ lib/setup/ lib/run/ lib/core/ 2>/dev/null || (echo "lint-merge-guards: config-namespace reference outside lib/config/ (siblings + lower layers cannot reach in)" >&2 && exit 1) + @! grep -rE '\b_ckipper_setup_' lib/account/ lib/worktree/ lib/config/ lib/run/ lib/core/ 2>/dev/null || (echo "lint-merge-guards: setup-namespace reference outside lib/setup/" >&2 && exit 1) + @! grep -rE '\b_ckipper_run_' lib/account/ lib/worktree/ lib/setup/ lib/config/ lib/core/ 2>/dev/null || (echo "lint-merge-guards: run-namespace reference outside lib/run/" >&2 && exit 1) + @! 
grep -rE '\b_ckipper_launcher_' lib/account/ lib/worktree/ lib/setup/ lib/config/ lib/run/ lib/core/ 2>/dev/null || (echo "lint-merge-guards: launcher-namespace reference outside lib/launcher/" >&2 && exit 1) + @! grep -rE '^_core_[a-z_]+\(\)' lib/account/ lib/worktree/ lib/config/ lib/setup/ lib/run/ lib/launcher/ --include='*.zsh' 2>/dev/null || (echo "lint-merge-guards: _core_* function defined outside lib/core/ (see .claude/rules/shell-conventions.md — _core_* is reserved for lib/core/)" >&2 && exit 1) + +install: + ./install.sh diff --git a/README.md b/README.md index 94be060..b5e9cc6 100644 --- a/README.md +++ b/README.md @@ -1,121 +1,281 @@ -# Claude Docker Sandbox +# Ckipper (pronounced "skipper") -Docker-based isolation for running Claude Code with `--dangerously-skip-permissions` safely. One command to spin up a sandboxed autonomous Claude session on any project. +> _This project is vibe-engineered. I use it personally for my own setup and it works well; use it at your own risk. Works on my machine ;) See [Contributing](#contributing)._ -Inspired by [incident.io's worktree workflow](https://incident.io/blog/shipping-faster-with-claude-code-and-git-worktrees) and [Rory Bain's gist](https://gist.github.com/rorydbain/e20e6ab0c7cc027fc1599bd2e430117d), extended with Docker containerization, egress firewall, safety hooks, and macOS Keychain auth integration. +> **Platform:** macOS only — uses macOS Keychain, Docker Desktop, and host SSH agent forwarding. + +A lightweight CLI for managing Claude Code accounts, worktrees, and Docker sandboxes. + +Inspired by [incident.io's worktree workflow](https://incident.io/blog/shipping-faster-with-claude-code-and-git-worktrees) and [Rory Bain's gist](https://gist.github.com/rorydbain/e20e6ab0c7cc027fc1599bd2e430117d), extended with Docker containerization, an egress firewall, safety hooks, macOS Keychain auth, and per-account isolation across credentials, settings, MCP, plugins, and projects. 
## The Problem -Claude Code's `--dangerously-skip-permissions` lets Claude work autonomously without clicking Allow for every action. But running it on your actual machine means Claude has full access to your entire filesystem, credentials, and network. +`--dangerously-skip-permissions` lets Claude work autonomously without clicking Allow for every action — but on your actual machine it has full access to your filesystem, credentials, and network. Running it inside a container is the whole point. ## The Solution ```bash -w Whmoro/orderguard my-feature --docker claude +ck run myorg/myapp my-feature +``` + +Creates a git worktree, optionally spins up a Docker container, and runs Claude inside it. Claude thinks it has full permissions but can only see the worktree. Your other projects, system files, and credentials are inaccessible. + +## Quick start + +```bash +git clone https://github.com/mswdev/Ckipper.git +cd Ckipper +./install.sh # auto-launches `ckipper setup` at the end ``` -This creates a git worktree, spins up a Docker container, and runs Claude inside it. Claude thinks it has full permissions, but it can only access the worktree you gave it. Your Documents, other projects, and system files are completely inaccessible. +`install.sh` deploys files under `~/.ckipper/`, adds the source line to your `.zshrc`, and ends by running the interactive setup wizard. The wizard registers your first Claude account, sets your projects directory, and configures default behaviors. Re-runnable any time via `ckipper setup`. 
+ +### Prerequisites + +- **macOS** with zsh +- **Docker Desktop** installed and running +- **Claude Code** installed and authenticated (`claude` command works) +- **GitHub auth**: SSH keys added to your SSH agent, or `gh auth login` on host +- **jq** and **gum** installed (`brew install jq gum`) + +## Core commands + +| Command | Purpose | +| --- | --- | +| `ck setup` | Interactive wizard for configuring Ckipper | +| `ck run <project> <branch>` | Create-or-cd to a worktree, optionally Docker | +| `ck config get/set/unset/list/edit` | View and modify settings | +| `ck account add/list/default/remove/rename/sync/redeploy-hooks` | Manage Claude accounts (see [Sync state between accounts](#sync-state-between-accounts)) | +| `ck worktree run/list/rm/rebuild-image` | Manage git worktrees | +| `ck doctor [--fix]` | Diagnose registry, hooks, schema; optionally repair | +| `ck` (no args) | Interactive launcher menu | -## What It Does +`ck` is a short alias for `ckipper`. `<project>` is a relative path under `$CKIPPER_PROJECTS_DIR` (default `~/Developer/`, e.g. `myorg/myapp`). Tab completion is included. -- **Creates a git worktree** from `origin/develop` (or uses an existing branch) -- **Installs dependencies** and copies `.env` files from the main project -- **Launches a Docker container** with the worktree mounted at `/workspace` -- **Entrypoint sets up the environment**: installs Linux-native binaries, configures git identity, authenticates `gh` CLI, sets Turbo cache path, optionally enables firewall -- **Runs Claude Code** in autonomous mode inside the container (when `claude` is specified) -- **Destroys the container** on exit (`--rm`) — the worktree persists for review -- **Warns you** if `.git/config` was modified during the session +## Configuration -## Quick Reference +Ckipper has a single source of truth for user-configurable settings ([`lib/core/schema.zsh`](lib/core/schema.zsh)). Use `ck config` to view and modify settings, or `ck setup` to walk the wizard.
+ +### Global keys + +These live in `~/.ckipper/docker/ckipper-config.zsh`. + +| Key | Type | Default | Description | +| --- | --- | --- | --- | +| `CKIPPER_PROJECTS_DIR` | path | `~/Developer` | Base directory containing your git projects | +| `CKIPPER_WORKTREES_DIR` | path | `$projects_dir/.worktrees` | Where worktrees are created | +| `CKIPPER_PORTS` | int array | `(3000)` | Ports to forward from container to host | +| `CKIPPER_DEFAULT_BRANCH` | string | _(empty)_ | Fallback base branch when `origin/HEAD` is unset | +| `CKIPPER_DEP_INSTALL_CMD` | string | `npm install` | Command run after worktree creation; empty = skip | +| `CKIPPER_NOTIFY_BELL` | bool | `true` | Install notify-bell hook into account dirs | +| `CKIPPER_ALIASES_AUTO_SOURCE` | bool | `true` | `install.sh` auto-adds aliases.zsh source line to `.zshrc` | + +`CKIPPER_EXTRA_VOLUMES` and `CKIPPER_EXTRA_ENV` are also defined in the same file (raw zsh arrays, not schema-managed). See "MCP support" below. + +### Per-account preferences + +These live in `~/.ckipper/accounts.json` under each account's `preferences` block. + +| Key | Type | Default | Description | +| --- | --- | --- | --- | +| `always_docker` | bool | `false` | Default `--docker` on for this account | +| `always_firewall` | bool | `false` | Default `--firewall` on for this account | +| `ssh_forward` | bool | `true` | Forward host `~/.ssh` into containers run with this account | + +### Flag override semantics + +Per-account preferences populate flag defaults; pass `--no-docker` / `--no-firewall` / `--no-ssh-forward` to override per invocation. Conversely, pass `--docker` / `--firewall` / `--ssh-forward` to opt in for a single invocation when the preference is off. + +## Multiple accounts + +Run a personal account in one terminal and a work account in another, fully isolated. Each gets its own credentials, MCP servers, plugins, projects, and session history. 
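The override semantics above boil down to "explicit flag beats stored preference". A sketch of that precedence rule (hypothetical helper name, not Ckipper's actual implementation):

```sh
# Resolve one boolean setting: a CLI flag, when present, always wins;
# otherwise the account preference from accounts.json supplies the default.
resolve_boolean_preference() {
    local account_preference="$1"   # "true" or "false"
    local cli_flag="$2"             # e.g. "--docker", "--no-docker", or "" when unset

    case "$cli_flag" in
        --docker|--firewall|--ssh-forward)          echo "true" ;;
        --no-docker|--no-firewall|--no-ssh-forward) echo "false" ;;
        *)                                          echo "$account_preference" ;;
    esac
}
```

So `always_docker=false` plus `--docker` runs in Docker for that one invocation, and `always_docker=true` plus `--no-docker` skips it.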
+ +### Add an account ```bash -w myorg/myapp feature-x --docker claude # Claude in Docker (skip-permissions) -w myorg/myapp feature-x --docker # shell in Docker container -w myorg/myapp feature-x --docker --firewall # Docker + egress firewall -w myorg/myapp feature-x # cd to worktree (no Docker) -w myorg/myapp feature-x claude # run Claude in worktree (no Docker) -w --list # list all worktrees -w --rm myorg/myapp feature-x # remove worktree + delete branch -w --rebuild-image # rebuild Docker image +ckipper account add work +``` + +`ckipper` walks you through `/login` and registers the account. Repeat for every account you want. + +### Use an account + +```bash +claude-work # auto-generated launcher +work # bare-name shortcut (skipped if it would shadow an existing command) +CLAUDE_CONFIG_DIR=~/.claude-work claude # raw form +``` + +`ckipper account add` re-sources `aliases.zsh` in your current shell, so new launchers are usable immediately — no `exec zsh`. + +### Inside Docker + +```bash +ck run myorg/app feature --account work --docker +``` + +If you're already in a terminal where `CLAUDE_CONFIG_DIR` is set (e.g., via `claude-work`), `ckipper run` picks up the account automatically — no flag needed. + +### List, default, remove + +```bash +ckipper account list +ckipper account default personal +ckipper account remove old-account +``` + +### How accounts are stored + +- Per-account state lives in `~/.claude-<name>/` (analogous to the legacy `~/.claude/`). +- The registry mapping accounts to dirs and Keychain services lives at `~/.ckipper/accounts.json` (chmod 600, atomic writes via `flock`). The file is on `accounts.json` schema v2 — preferences (`always_docker`, `always_firewall`, `ssh_forward`) are stored per account. +- Auto-generated `~/.ckipper/aliases.zsh` defines `claude-<name>` (and a bare `<name>` shortcut, when it doesn't shadow an existing command) per registered account.
+ +- Hooks under `~/.ckipper/hooks/` are the canonical source — they're synced into each registered account dir automatically after install/setup. + +### Don't run the same account in two sessions + +Two terminals running the **same** account simultaneously will hit a known OAuth refresh-token race ([upstream issue #24317](https://github.com/anthropics/claude-code/issues/24317)) — symptoms: frequent re-login prompts, lost sessions. + +- **Safe:** `claude-personal` in one terminal, `claude-work` in another. Different accounts, different refresh tokens, no race. +- **Bad:** `claude-personal` in two terminals at once. + +If you want concurrent runs of the *same* account, register it twice under two names (`personal-a`, `personal-b`) — though this means re-`/login` for each. + +## Sync state between accounts + +`ckipper account sync` copies state between registered accounts — MCP servers, settings, agents, commands, skills, user hooks, etc. — interactively by default, with one source and one or more destinations. + +### Syncable types + +| Type | What it covers | +|---|---| +| `mcp` | `.claude.json` `.mcpServers` (per server) | +| `settings` | `settings.json` top-level + nested keys (excludes `.hooks`) | +| `claude-md` | `CLAUDE.md` (user memory) | +| `agents` | `agents/*.md` | +| `commands` | `commands/*.md` | +| `output-styles` | `output_styles/*.md` | +| `skills` | `skills/<name>/` (per-directory; symlinks preserved) | +| `statusline` | `settings.json` `.statusLine` + referenced script (if internal to the account dir) | +| `hooks` | User-written hooks under `<account_dir>/hooks/` (filtered against the install allowlist) + paired `settings.json` `.hooks` entries | +| `prefs` | Account preferences in `accounts.json` (`always_docker`, `always_firewall`, `ssh_forward`) | + +Plugins are not a separate type — sync `enabledPlugins` + `extraKnownMarketplaces` (both in the `settings` type) and Claude Code re-fetches the plugins on the destination's next launch.
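At its core the `mcp` type is a per-server copy between two `.claude.json` files. A standalone `jq` sketch of that idea (the temp files and server definition here are made up; the real sync also handles backups, confirmation, and locking):

```sh
# Copy one MCP server entry from a source .claude.json into a destination.
# Hypothetical illustration -- not Ckipper's actual sync code.
source_config=$(mktemp)
destination_config=$(mktemp)
echo '{"mcpServers":{"github":{"command":"uvx","args":["mcp-server-github"]}}}' > "$source_config"
echo '{"mcpServers":{}}' > "$destination_config"

server_name="github"
server_definition=$(jq -c --arg name "$server_name" '.mcpServers[$name]' "$source_config")
jq --arg name "$server_name" --argjson definition "$server_definition" \
    '.mcpServers[$name] = $definition' "$destination_config" > "$destination_config.tmp" \
    && mv "$destination_config.tmp" "$destination_config"
```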
+ +### Common commands + +```sh +# Full interactive wizard — picks source, targets, and types +ckipper account sync + +# One-shot single type +ckipper account sync personal work --include mcp + +# Bundle: every user-customization (no prefs) +ckipper account sync personal work --include customizations + +# Multi-destination +ckipper account sync personal work client1 client2 --include all + +# Dry run (summary only, no writes) +ckipper account sync personal work --include all --dry-run + +# Apply without confirm prompt (for scripting) +ckipper account sync personal work --include all --yes ``` -`` is a relative path under `~/Developer/` (e.g. `Whmoro/orderguard`, `Vibma`). Tab completion is included. - -## What's In the Container - -- Node.js 24, git, gh CLI, ripgrep, curl, jq, python3, tmux -- Chromium headless (Playwright/Puppeteer) with `--no-sandbox` -- Claude Code (native installer) -- `uv`/`uvx` for Python-based MCP servers -- `bun`/`bunx` for fast JS runtime and statusline commands -- `iptables-legacy` for optional egress firewall -- Non-root `claude` user - -## What the Entrypoint Does - -On every container start, `entrypoint.sh` automatically: - -1. **Copies `.claude.json`** from read-only staging mount to writable location (prevents race condition with host) -2. **Copies and sanitizes SSH config** from read-only `.ssh-host` staging mount — strips macOS-specific `UseKeychain` option that breaks Linux OpenSSH -3. **Disables Chrome extension checks** via jq (no browser in container) -4. **Writes OAuth credentials** from `CLAUDE_CREDENTIALS` env var to `.credentials.json` -5. **Sets git identity** (`user.name` / `user.email`) from `.claude.json` account info -6. **Disables GPG signing** via `GIT_CONFIG_COUNT` environment variables (no GPG key in container). Uses env vars instead of `git config` so the host's `.git/config` is never modified — the overrides disappear when the container exits -7. 
**Authenticates `gh` CLI** — unsets `GH_TOKEN`, runs `gh auth login --with-token`, then `gh auth setup-git` (enables `git push` over HTTPS) -8. **Enables egress firewall** if `ENABLE_FIREWALL=1` -9. **Sets `TURBO_CACHE_DIR`** to `/workspace/.turbo/cache` (worktree git root points to unwritable host path) -10. **Forces truecolor statusline** — creates a `bunx` wrapper that injects `FORCE_COLOR=3` (Claude Code doesn't pass it to subprocesses) -11. **Reinstalls native binaries** — `npm install --prefer-offline` replaces macOS binaries (rollup, biome, esbuild, swc) with Linux versions -12. **Fixes volume permissions** — runs `chown` on named volumes that may retain stale UIDs from older image builds -13. **Pre-installs uvx-based MCP servers** — parses `.claude.json` for MCP servers that use `uvx`, pre-installs them with `uv tool install`, and rewrites the config to invoke the installed binary directly (avoids Claude's MCP startup timeout) -14. **Clears credential env vars** (`unset CLAUDE_CREDENTIALS GH_TOKEN`) before launching the command -15. **Runs the specified command** — `claude --dangerously-skip-permissions` if `claude` was passed, otherwise drops to an interactive bash shell +### Named bundles + +| Bundle | Resolves to | +|---|---| +| `all` | every type | +| `customizations` | mcp, settings, claude-md, agents, commands, output-styles, skills, statusline, hooks | +| `claude-config` | mcp, settings, hooks | +| `preferences` | prefs | + +### Safeguards & undo + +Every destructive write is preceded by a copy to `<destination_dir>/.ckipper-sync-backups/<timestamp>-from-<source>/`. The summary table prints the backup directory path before applying.
To restore: + +```sh +ckipper account sync undo work # restore most recent backup +ckipper account sync undo work --pick # gum-pick from backup ledger +ckipper account sync undo work --list # print backup directory paths +``` + +Sync refuses to write when Claude is running with the destination's config dir (override with `--force` if you understand the risk). + +### Sync vs. redeploy-hooks + +These two commands sound similar but do different things: + +- **`ckipper account sync ... --include hooks`** — peer-to-peer copy of *user-written* hooks (any hook file in `<account_dir>/hooks/` whose filename does NOT match a ckipper-managed install hook). Includes the paired `settings.json` `.hooks` entry. +- **`ckipper account redeploy-hooks`** — pushes the ckipper safety hooks (`bash-guardrails`, `protect-claude-config`, `docker-context`, `notify-bell`) from `~/.ckipper/hooks/` to every registered account. Run after editing a script in the install dir. ## Security -### Docker Isolation +### Docker isolation Claude **cannot**: access files outside the worktree, reach your Documents/Desktop/other projects, install system packages, persist processes after exit, create other Docker containers, or access your LAN (ports bound to `127.0.0.1` only). -### Safety Hooks (Docker-only, no-op on host) +**What Claude *can* do inside the container:** + +- Full read/write on the worktree (`/workspace`). +- Read/write on the parent repo's `.git/` directory (so commits, fetches, and branch ops work). +- Read/write on the active per-account dir (`~/.claude-<name>/`). +- Read a copy of your `~/.ssh` contents — staged read-only at `~/.ssh-host` and copied into the container's tmpfs at `~/.ssh` on startup, **only when the `ssh_forward` preference is enabled** for the account (set it to `false` for accounts that don't need git push over SSH). The copy lives only in container RAM and disappears when the container exits.
+- Use the host's SSH agent (forwarded via `/run/host-services/ssh-auth.sock`, also gated by `ssh_forward`) and a `gh`-authenticated session for HTTPS pushes. +- Outbound network — unrestricted by default; default-deny with a domain whitelist when `--firewall` is set. + +### Prompt injection: the agent inside is still a target -Four Claude Code hooks activate inside Docker: +Container isolation contains accidents and blocks outright host-level escalation; it does not stop a successful prompt-injection attack from doing damage *with* Claude's legitimate access. An attacker-controlled input — a poisoned README in a dependency, a hostile MCP server, a fetched URL, a GitHub issue body that Claude reads, a malicious commit message — can try to convince Claude to act against you using the access it already has. That includes writing files anywhere in the worktree, committing and pushing to your remote (the SSH agent is forwarded and `gh` is authenticated), reading the in-container copy of `~/.ssh` and the per-account `.claude.json`, and sending data outbound to any whitelisted domain. Treat untrusted inputs accordingly: be cautious about what repos you point Claude at, what MCP servers you install, and what URLs you ask it to fetch. The `--firewall` mode meaningfully shrinks the exfiltration surface but does not eliminate it (GitHub itself is whitelisted). + +### Safety hooks (Docker-only, no-op on host) + +These hooks are UX guardrails, not a security boundary — the container, the optional egress firewall, and the absence of host write access are the actual isolation. Four Claude Code hooks activate inside Docker: 1. **Config Protection** (`protect-claude-config.sh`) — Blocks Edit/Write to Claude config files (settings.json, hooks, plugins, etc.) that could execute code on the host -2. **Bash Guardrails** (`bash-guardrails.sh`) — Blocks destructive commands: +2.
**Bash Guardrails** (`bash-guardrails.sh`) — Blocks destructive commands run via the Bash tool (the Read tool is not hooked, so this is not a defense against direct file reads): - `rm -rf` (except build artifacts like `node_modules`, `dist`, `.next`) - `git push --force` (suggests `--force-with-lease`) - `git reset --hard` (suggests `git stash`) - Writing to `.git/hooks/` or `.git/config` (these execute on the host) - Recursive `chmod`/`chown` - - Reading SSH keys or credential files directly + - Reading SSH keys or credential files via shell commands like `cat`/`cp`/`base64` (catches scripted exfiltration, not direct reads) - Modifying Claude config files via shell 3. **Context Injection** (`docker-context.sh`) — Tells Claude the safety rules at startup so it avoids triggering guardrails -4. **Notification Bell** (`notify-bell.sh`) — Sends a terminal bell character (`\a`) on Claude Code notification events, which passes through Docker's TTY to the host terminal. Triggers native notifications (dock bounce, sound) in Ghostty, iTerm2, Warp, and other terminals that support terminal bell +4. **Notification Bell** (`notify-bell.sh`) — Sends a terminal bell character (`\a`) on Claude Code notification events. Triggers native notifications (dock bounce, sound) in Ghostty, iTerm2, Warp, and other terminals that support terminal bell. Toggle install via `CKIPPER_NOTIFY_BELL`. -### Additional Security +### Additional security -- `core.hooksPath` set globally to `~/.git-hooks` — git ignores `.git/hooks/` so planted hooks can't execute on host +- `core.hooksPath` set globally to `~/.git-hooks` — git ignores `.git/hooks/` so planted hooks can't execute on host. `install.sh` only sets this if you don't already have a different value (so husky/pre-commit/etc. are preserved); the global setting is overridable by per-repo config, so it isn't an absolute backstop.
- GPG signing disabled via `GIT_CONFIG_COUNT` env vars — no file modification, overrides both local and global config, disappears when container exits - Post-session `.git/config` tamper detection - Credentials cleared from environment before launching the command (invisible to `env` and `/proc/self/environ`) -- `.claude.json` mounted read-only as staging copy (prevents race condition with host) -- SSH config mounted read-only as staging copy (`.ssh-host`), copied and sanitized by entrypoint — macOS-specific `UseKeychain` stripped -- SSH agent forwarded from host via Docker Desktop socket (`/run/host-services/ssh-auth.sock`) — no private keys copied into container -- `~/.claude` dual-mounted at both `/home/claude/.claude` and the host path (e.g. `/Users//.claude`) so plugins with hardcoded absolute paths resolve correctly +- Per-account `.claude.json` is bind-mounted RW; container mutations propagate to the host file (intentional, gated by the same-account-twice advisory) +- SSH config mounted read-only as staging copy (`.ssh-host`), copied and sanitized by entrypoint — macOS-specific `UseKeychain` stripped. Gated by the per-account `ssh_forward` preference. +- Per-account `~/.claude-<account>` mounted at the same host path inside the container so plugins with hardcoded absolute paths resolve correctly - No Docker socket mounted (cannot create sibling containers) -### Optional Egress Firewall +### Optional egress firewall ```bash -w myorg/myapp feature-x --docker --firewall claude +ck run myorg/myapp feature-x --docker --firewall ``` Default-deny iptables firewall that only allows outbound traffic to whitelisted domains. Uses `iptables-legacy` (Docker Desktop doesn't support `nf_tables`). DNS auto-detected from `/etc/resolv.conf`. Blocked requests silently drop (~60s timeout). Default whitelist: Anthropic API, GitHub, npm, PyPI, Sentry, and common MCP services (Atlassian, Clerk, Figma, ClickUp, Context7, Google Fonts). Edit `docker/init-firewall.sh` to customize.
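The core of the firewall script can be sketched as follows — a simplified illustration, not the shipped implementation: `ALLOWED_DOMAINS` is the array the script exposes, while `resolve_ips` and `emit_allow_rules` are hypothetical helper names, and the real script applies rules directly (plus DNS and established-connection handling) instead of printing them:

```bash
#!/usr/bin/env bash
# Hedged sketch of the default-deny + whitelist pattern. Printing the rules
# keeps the sketch inspectable without root; the real init-firewall.sh runs
# iptables-legacy directly and also allows DNS and established connections.

ALLOWED_DOMAINS=(api.anthropic.com github.com registry.npmjs.org)

# Resolve a domain to its IPv4 addresses, one per line.
resolve_ips() {
  getent ahostsv4 "$1" | awk '{ print $1 }' | sort -u
}

# One ACCEPT rule per resolved address, then the default-deny policy.
emit_allow_rules() {
  local domain ip
  for domain in "${ALLOWED_DOMAINS[@]}"; do
    while IFS= read -r ip; do
      [ -n "$ip" ] && printf 'iptables-legacy -A OUTPUT -d %s -j ACCEPT\n' "$ip"
    done < <(resolve_ips "$domain")
  done
  printf 'iptables-legacy -P OUTPUT DROP\n'   # everything else drops
}
```

Because blocked packets are dropped rather than rejected, a request to a non-whitelisted domain hangs until the client times out — the ~60s behavior noted above.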
-## MCP Support +Default-deny applies to IPv4. Container IPv6 is off by default in Docker Desktop; if you've enabled it, turn it off when running with `--firewall` until IPv6 default-deny is in place. + +### macOS Keychain authentication + +On macOS, Claude Code stores OAuth credentials in the macOS Keychain (service: `Claude Code-credentials`) and actively deletes the on-disk credentials file. `ckipper run` extracts credentials from Keychain at launch and passes them to the container via environment variable. The entrypoint writes them to disk, authenticates `gh` CLI, then clears the env vars before starting Claude. + +Tokens are short-lived (~6 hours). If they expire mid-session, exit the container, run any `claude` command on the host (refreshes the token), then restart. + +## MCP support | MCP Server | Type | Works? | |---|---|---| @@ -124,7 +284,9 @@ Default whitelist: Anthropic API, GitHub, npm, PyPI, Sentry, and common MCP serv | MCPs with local files | node/uvx (mounted ro) | Yes (add mount) | | Docker-based MCPs | Docker-in-Docker | No (security) | -For MCPs that reference local files, add entries to `W_EXTRA_VOLUMES` in `~/.claude/docker/w-config.zsh`. Mount at the exact same host path so MCP configs work unchanged. +> **Supply-chain note:** an MCP server is third-party code that runs inside the container with the same access Claude has — full RW on the worktree, the per-account `.claude.json`, and outbound network. Pin versions where the registry supports it, audit servers before adding them, and remove servers you no longer use. + +For MCPs that reference local files, add entries to `CKIPPER_EXTRA_VOLUMES` in `~/.ckipper/docker/ckipper-config.zsh`. Mount at the exact same host path so MCP configs work unchanged.
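For example — the `host_path:container_path:mode` entry format is the documented one, while the notes path and the translation loop below are an assumed sketch of how entries become `docker run -v` flags:

```bash
# ~/.ckipper/docker/ckipper-config.zsh (excerpt) — mount host files an MCP
# server needs at the SAME path inside the container, so the MCP config's
# absolute paths keep working. The notes path is a hypothetical example.
CKIPPER_EXTRA_VOLUMES=(
  "$HOME/notes:$HOME/notes:ro"
)

# Sketch: how such entries would translate into docker flags.
docker_args=()
for vol in "${CKIPPER_EXTRA_VOLUMES[@]}"; do
  docker_args+=( -v "$vol" )
done
# docker run "${docker_args[@]}" ...
```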
Two named Docker volumes support uvx-based MCP servers: - **`claude-uv-cache`** — persists the uv package cache (downloaded wheels, git clones) across container restarts @@ -132,164 +294,134 @@ Two named Docker volumes support uvx-based MCP servers: The entrypoint pre-installs uvx-based MCP servers before Claude starts and rewrites the container's `.claude.json` to invoke the installed binary directly. This eliminates the network freshness check and ephemeral venv creation that cause intermittent MCP startup timeouts. -## Setup +## Customization -### Prerequisites +All schema-managed settings are reachable via `ck config get/set/unset/list/edit` or the `ck setup` wizard. Non-schema items in `~/.ckipper/docker/ckipper-config.zsh` (`CKIPPER_EXTRA_VOLUMES`, `CKIPPER_EXTRA_ENV`) and the firewall whitelist (`docker/init-firewall.sh`) are still edited by hand. -- **macOS** with zsh -- **Docker Desktop** installed and running -- **Claude Code** installed and authenticated (`claude` command works) -- **GitHub auth**: SSH keys added to your SSH agent, or `gh auth login` on host -- **jq** installed (`brew install jq`) +## Updating -### Option 1: Manual Install +### Update the host-side install (Ckipper itself) ```bash -# Clone the repo -git clone https://github.com/whmoro/claude-docker-sandbox.git -cd claude-docker-sandbox - -# Run the installer (copies all files, merges hooks, adds source line) -./install.sh - -# Customize your config -# Edit ~/.claude/docker/w-config.zsh with your MCP mounts, ports, etc. - -# Build the Docker image (takes a few minutes first time) +cd /path/to/Ckipper +git pull +./install.sh # or: make install source ~/.zshrc -w --rebuild-image - -# Test it -w test-branch --docker claude ``` -### Option 2: Let Claude Do It - -Clone the repo, then open Claude Code and paste this prompt: +`install.sh` is idempotent. It re-deploys `~/.ckipper/docker/` (entry scripts, Dockerfile, entrypoint, `lib/` tree) and `~/.ckipper/hooks/`. 
Your `accounts.json`, `aliases.zsh`, and `ckipper-config.zsh` are preserved. **After updating, run `ckipper setup` to pick up new schema keys.** -> Read the README.md in this repo and run `./install.sh`. Then run `source ~/.zshrc && w --rebuild-image` and tell me when it's ready to test. Show me what's in `~/.claude/docker/w-config.zsh` so I can customize it. +### Update the container -### What Gets Installed Where +```bash +ck wt rebuild-image +``` -| Source | Destination | Purpose | -|---|---|---| -| `docker/Dockerfile` | `~/.claude/docker/Dockerfile` | Docker image definition | -| `docker/entrypoint.sh` | `~/.claude/docker/entrypoint.sh` | Container startup + environment setup | -| `docker/init-firewall.sh` | `~/.claude/docker/init-firewall.sh` | Egress firewall | -| `hooks/protect-claude-config.sh` | `~/.claude/hooks/protect-claude-config.sh` | Edit/Write guard | -| `hooks/bash-guardrails.sh` | `~/.claude/hooks/bash-guardrails.sh` | Bash command guard | -| `hooks/docker-context.sh` | `~/.claude/hooks/docker-context.sh` | Context injection | -| `hooks/notify-bell.sh` | `~/.claude/hooks/notify-bell.sh` | Notification bell | -| `w-function.zsh` | `~/.claude/docker/w-function.zsh` | w() function (sourced by .zshrc) | -| `w-config.zsh.example` | `~/.claude/docker/w-config.zsh` | User config (ports, mounts, env vars) | -| `settings-hooks.json` | Auto-merged into `~/.claude/settings.json` | Hook registration | +Updates everything in the container — system packages, Claude Code, uv/uvx, bun, gh CLI, and Chromium. The build cache-busts all layers so nothing goes stale. Only the base image (`node:24-slim`) is cached; pull it manually with `docker pull node:24-slim` if needed. -### macOS Keychain Authentication +To clear stale uv/MCP caches (e.g., after permission errors or broken tool installs): -On macOS, Claude Code stores OAuth credentials in the macOS Keychain (service: `Claude Code-credentials`) and actively deletes the on-disk credentials file. 
The `w()` function extracts credentials from Keychain at launch and passes them to the container via environment variable. The entrypoint writes them to disk, authenticates `gh` CLI, then clears the env vars before starting Claude. +```bash +docker volume rm claude-uv-cache claude-uv-tools +``` -Tokens are short-lived (~6 hours). If they expire mid-session, exit the container, run any `claude` command on the host (refreshes the token), then restart. +The volumes are recreated automatically on the next container start. -## Testing +## Known limitations (Docker mode only) -After setup, run the comprehensive environment test to verify everything works: +These apply only when you launch Claude Code inside the Ckipper Docker container (`ck run ... --docker`). On the host, these features work normally. They cannot be fully resolved without upstream changes. -```bash -w test-branch --docker claude -``` +### OAuth token expiry across host and container -Then paste the contents of [`test-prompt.md`](test-prompt.md) into the Docker Claude session. It covers 11 sections: +Claude Code stores OAuth credentials in the macOS Keychain. When the container's Claude refreshes an expired token (~6 hours), the host's token is invalidated server-side. The refreshed token lives in container RAM (tmpfs) and cannot be written back to Keychain from Linux. If you run long container sessions, the host Claude will be logged out. Workaround: run `claude` on the host to re-authenticate. 
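The host-side Keychain extraction step can be sketched like this — the `Claude Code-credentials` service name is the documented one, but the function name and the error handling are illustrative assumptions, not Ckipper's verbatim code:

```bash
#!/usr/bin/env bash
# Hedged sketch: read Claude Code's OAuth credentials from the macOS
# Keychain. "Claude Code-credentials" is the documented service name;
# extract_claude_credentials is an illustrative name, not the real helper.
extract_claude_credentials() {
  security find-generic-password -s "Claude Code-credentials" -w 2>/dev/null
}

# Launch-time usage (sketch): fail fast if the Keychain entry is missing,
# otherwise hand the blob to the container via the environment.
#   CLAUDE_CREDENTIALS="$(extract_claude_credentials)" \
#     || { echo "Could not extract credentials — run /login on the host" >&2; exit 1; }
```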
-- Entrypoint verification (env vars, git identity, Chrome disabled, Turbo cache, credential clearing from `/proc/self/environ`) -- File system access (read, write, delete, ownership, SSH staging mount, config sanitization) -- Code modification round-trip (Edit tool on mounted files) -- Git operations (status, log, branch, commit, SSH agent forwarding, gh CLI, HTTPS push via credential helper) -- Build tools (npm, biome, turbo, tmux, Chromium headless, uv/uvx, Python) -- Full project build -- Dev servers -- Tests and linting -- MCP and network access -- Safety hooks (4 blocked actions + guardrail bypass testing) -- Container isolation (non-root user, sudo restrictions, no Docker socket, setuid audit) +### Clipboard / image paste -See `test-prompt.md` for the full prompt and expected results table. +Ctrl+V image paste does not work inside the container. Claude Code uses `pbpaste` (macOS-only) to access the system clipboard, which doesn't exist in the Linux container. There is no standard mechanism for forwarding the macOS clipboard into a Docker container. OSC 52 terminal escape sequences can forward text clipboard but not images. -## Customization +### Voice mode (`/voice`) -### Firewall Domains +Voice mode requires microphone access, which is unavailable inside the container. Docker Desktop for Mac does not expose the host's microphone to containers. There is no equivalent of the SSH agent forwarding pattern for audio devices on macOS. -Edit `docker/init-firewall.sh` → `ALLOWED_DOMAINS` array, then `w --rebuild-image`. +### Claude in Chrome MCP -### Forwarded Ports +The [Claude in Chrome](https://www.anthropic.com/news/claude-for-chrome) browser extension cannot connect to Claude Code inside the container. The extension's discovery mechanism doesn't bridge the host/container boundary, so the extension reports "Browser extension is not connected." Reference: [#25506](https://github.com/anthropics/claude-code/issues/25506). 
Workaround: run Claude Code on the host (without `--docker`) when you need the Chrome extension. -Edit `W_PORTS` in `~/.claude/docker/w-config.zsh`. +### Custom system-sound hooks -### Base Branch +User-written hooks that shell out to macOS audio/AppleScript binaries (`afplay`, `say`, `osascript`) won't work inside the container — the binaries don't exist on Linux and there's no host audio device. Ckipper's built-in `notify-bell.sh` sidesteps this by emitting the terminal bell character (`\a`), which most modern terminals translate into a system notification. If you author a hook that needs richer host-only notifications, gate it on `[ ! -f /.dockerenv ]` (the same idiom every built-in safety hook uses) so it no-ops in the container. -Worktrees are created from `origin/develop`. Search for `develop` in `w-function.zsh` (or `~/.claude/docker/w-function.zsh` if deployed) and change to `main` or your default branch. +## Multi-account caveats (host and Docker) -### MCP Mounts +These apply to the multi-account model in general — they're upstream Claude Code behavior, not Ckipper bugs, and they apply equally on the host and inside Docker. Ckipper papers over some of them; others you should know about. -Add entries to `W_EXTRA_VOLUMES` in `~/.claude/docker/w-config.zsh`. Format: `"host_path:container_path:mode"`. +### OAuth refresh token races (upstream) -### Statusline +Two concurrent Claude Code sessions on the same account share a single-use OAuth refresh token. The first to refresh wins; the second gets a 404 and loses authentication. Symptoms: frequent `/login` prompts, lost sessions. References: [#24317](https://github.com/anthropics/claude-code/issues/24317), [#27933](https://github.com/anthropics/claude-code/issues/27933). **Workaround:** different accounts in different terminals (the model Ckipper is built around). 
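What "different accounts in different terminals" means mechanically: each generated alias pins `CLAUDE_CONFIG_DIR` to its own account dir before launching. A hedged sketch of what `aliases.zsh` generates — the real generated function may differ:

```bash
# Sketch of an auto-generated per-account launcher for an account named
# "work". CLAUDE_CONFIG_DIR is the upstream Claude Code env var; the exact
# body Ckipper emits is an assumption here.
work() {
  CLAUDE_CONFIG_DIR="$HOME/.claude-work" claude "$@"
}
```

Two terminals running `personal` and `work` therefore never share a config dir — or a refresh token — which is what sidesteps the race.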
-If you use a custom statusline (like [ccstatusline](https://github.com/sirmalloc/ccstatusline)), add the config and cache mounts to `W_EXTRA_VOLUMES` in `~/.claude/docker/w-config.zsh`: -- **Config mount** (`~/.config/ccstatusline`, read-only) — theme, widget layout, powerline settings -- **Cache mount** (`~/.cache/ccstatusline`, read-write) — shares usage API cache with host to avoid 429 rate limits +### Credentials silently wiped on failed refresh (upstream) -The `bun` runtime is included in the container image. The entrypoint creates a `bunx` wrapper that injects `FORCE_COLOR=3` for truecolor statusline output (Claude Code doesn't pass this to subprocesses). +If a token refresh fails mid-flight (network blip, server error), Claude Code may overwrite the stored credentials with an empty value rather than preserving the old one. Reference: [#29896](https://github.com/anthropics/claude-code/issues/29896). **Recovery:** `claude- /login` again. -## Updating +### Keychain permission glitches after macOS updates (upstream) -Run `w --rebuild-image` to update everything in the container — system packages, Claude Code, uv/uvx, bun, gh CLI, and Chromium. The build cache-busts all layers so nothing goes stale. Only the base image (`node:24-slim`) is cached; pull it manually with `docker pull node:24-slim` if needed. +After macOS or Claude Code updates, the Keychain entry can become inaccessible to Claude Code, forcing manual re-`/login` 1–N times per day. Reference: [#19456](https://github.com/anthropics/claude-code/issues/19456). Independent of Ckipper. -To clear stale uv/MCP caches (e.g., after permission errors or broken tool installs): +### Project-level files are SHARED across accounts (by design) -```bash -docker volume rm claude-uv-cache claude-uv-tools -``` +Files inside a project repo are *not* governed by `CLAUDE_CONFIG_DIR`: -The volumes are recreated automatically on the next container start. 
+- `/.claude/settings.json` (committed) +- `/.claude/settings.local.json` (gitignored) +- `/.mcp.json` (committed, project-scoped MCP servers) +- `/CLAUDE.md` -## Known Limitations +This is usually a feature — your `personal` and `work` accounts working in the same repo see the same project rules and project MCPs. If you don't want that, accounts must work in separate worktrees or separate clones. -These are inherent to running Claude Code inside a Docker container on macOS and cannot be fully resolved without upstream changes. +### MCP servers are per-account (user-scoped only) -### OAuth Token Expiry Across Host and Container +`mcpServers` lives in each account's `.claude.json`. When you `ckipper account add `, the new account starts with **zero** user-scoped MCP servers. Use `ckipper account sync --include mcp` (or run the wizard with no args) to copy them — see [Sync state between accounts](#sync-state-between-accounts). -Claude Code stores OAuth credentials in the macOS Keychain. When the container's Claude refreshes an expired token (~6 hours), the host's token is invalidated server-side. The refreshed token lives in container RAM (tmpfs) and cannot be written back to Keychain from Linux. If you run long container sessions, the host Claude will be logged out. Workaround: run `claude` on the host to re-authenticate. +### Plugins and marketplaces are per-account -### Clipboard / Image Paste +`enabledPlugins` and `extraKnownMarketplaces` (in `settings.json`) are per-account. The `~/.ckipper/plugins/known_marketplaces.json` cache is independent per account dir. -Ctrl+V image paste does not work inside the container. Claude Code uses `pbpaste` (macOS-only) to access the system clipboard, which doesn't exist in the Linux container. There is no standard mechanism for forwarding the macOS clipboard into a Docker container. OSC 52 terminal escape sequences can forward text clipboard but not images. 
+### Diagnose anytime -### Voice Mode (`/voice`) - -Voice mode requires microphone access, which is unavailable inside the container. Docker Desktop for Mac does not expose the host's microphone to containers. There is no equivalent of the SSH agent forwarding pattern for audio devices on macOS. +`ckipper doctor` runs a full health check: registry validity, account dir presence, `.claude.json`/`settings.json`/`hooks/` per-account, Keychain entries, `~/.zshrc` source lines, schema validation against `ckipper-config.zsh`, and `accounts.json` v2 preferences shape. Run it whenever something looks off; pass `--fix` for guided in-place repair. ## Troubleshooting | Problem | Fix | |---|---| | Docker not running | Start Docker Desktop | -| Image not found | `w --rebuild-image` | +| Image not found | `ck wt rebuild-image` | | "Not logged in" in container | Run `claude` on host to refresh Keychain, restart | | "Could not extract credentials" | Run `/login` on host | | Credentials expired mid-session | Exit, run `claude` on host, restart | | Port conflict | Busy ports auto-skipped | | Firewall blocking needed domain | Add to `init-firewall.sh`, rebuild | -| Hook not blocking | Check `settings.json` uses `$HOME/` paths | +| Hook not blocking | Run `ckipper doctor --fix` | | GitHub MCP failed | Expected — Docker-in-Docker disabled | | `gh` commands fail | Check GH_TOKEN extracted from `.claude.json` | | `git commit` fails (no identity) | Entrypoint should set this automatically; check `.claude.json` has `oauthAccount` | -| Native binary errors (Exec format) | Run `w --rebuild-image` — entrypoint runs `npm install` to fix platform binaries | -| Turbo cache permission denied | Entrypoint sets `TURBO_CACHE_DIR`; run `w --rebuild-image` if missing | -| Branch already checked out | Switch main repo to different branch: `cd ~/Developer/ && git checkout develop` | -| Stale worktree directory | Remove manually: `rm -rf ~/Developer/.worktrees//` | -| Statusline not rendering correctly | 
Add ccstatusline mounts to `W_EXTRA_VOLUMES` in `~/.claude/docker/w-config.zsh`; ensure `bun` is in the image (`w --rebuild-image`) | -| `git push` fails (SSH permission denied) | Ensure SSH keys are added to your agent (`ssh-add -l` to check); Docker Desktop forwards the host's SSH agent automatically | +| Native binary errors (Exec format) | Run `ck wt rebuild-image` — entrypoint runs `npm install` to fix platform binaries | +| Branch already checked out | Switch main repo to different branch: `cd $CKIPPER_PROJECTS_DIR/<repo> && git checkout develop` | +| Stale worktree directory | `ck wt rm <branch>` | +| `git push` fails (SSH permission denied) | Ensure SSH keys are added to your agent (`ssh-add -l` to check) and `ssh_forward` is enabled for the active account | | GPG signing issues in container | Handled automatically via `GIT_CONFIG_COUNT` env vars; host config is not modified | -| `.env.local` not copied to worktree | Fixed: worktree creation now copies all `.env*` files except `.env.example` | -| uvx MCP server fails to start | Run `w --rebuild-image`; if still broken, delete stale volumes: `docker volume rm claude-uv-cache claude-uv-tools` | -| Claude Code version outdated | Run `w --rebuild-image` — Claude and uv are always re-fetched | +| `.env.local` not copied to worktree | Worktree creation copies all `.env*` files except `.env.example` | +| uvx MCP server fails to start | Run `ck wt rebuild-image`; if still broken, delete stale volumes: `docker volume rm claude-uv-cache claude-uv-tools` | +| Claude Code version outdated | Run `ck wt rebuild-image` — Claude and uv are always re-fetched | + +## Contributing + +PRs welcome. See [`CONTRIBUTING.md`](CONTRIBUTING.md) for the workflow, code style, and how to run the test suite (`make bootstrap && make test`). + +## Security disclosure + +Found a vulnerability? See [`SECURITY.md`](SECURITY.md) for private reporting. Please do not open a public issue. + +## License + +MIT — see [`LICENSE`](LICENSE).
diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000..e1041e6 --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,25 @@ +# Security Policy + +## Reporting a vulnerability + +Please do **NOT** open a public GitHub issue for security vulnerabilities. + +Instead, report privately via [GitHub Security Advisories](https://github.com/mswdev/Ckipper/security/advisories/new). + +We aim to acknowledge reports within 72 hours and to provide a fix or mitigation timeline within 7 days. + +## Scope + +In scope: +- Credential handling (Keychain access, container injection, file permissions). +- Hook bypass where the bypass affects security (note that `bash-guardrails.sh` is a UX guardrail by design, not a security boundary). +- Container escape vectors in the Docker sandbox. +- Path traversal or shell injection in CLI tools. + +Out of scope: +- Bypasses of `bash-guardrails.sh` via `bash -c`/heredocs/eval — this is documented, intentional, and not a security boundary. +- Issues requiring physical access to the host. + +## Supported versions + +This is a single-version-stream project. The latest release on `main` is the only supported version. diff --git a/ckipper.zsh b/ckipper.zsh new file mode 100644 index 0000000..2000ee0 --- /dev/null +++ b/ckipper.zsh @@ -0,0 +1,433 @@ +#!/usr/bin/env zsh +# Ckipper main dispatcher. +# +# Sources shared primitives from lib/core/, account-management subcommands +# from lib/account/, and worktree subcommands from lib/worktree/, then +# exposes the top-level `ckipper` command (with `ck` short alias). + +# Ckipper (pronounced "skipper") — multi-account Claude Code manager +# Sourced from ~/.zshrc. 
+ +CKIPPER_DIR="${CKIPPER_DIR:-$HOME/.ckipper}" +CKIPPER_REGISTRY="$CKIPPER_DIR/accounts.json" +CKIPPER_REGISTRY_VERSION=2 + +CKIPPER_REPO_DIR="${0:A:h}" + +# Core primitives +source "$CKIPPER_REPO_DIR/lib/core/utils.zsh" +source "$CKIPPER_REPO_DIR/lib/core/registry.zsh" +source "$CKIPPER_REPO_DIR/lib/core/keychain.zsh" +source "$CKIPPER_REPO_DIR/lib/core/fuzzy.zsh" +source "$CKIPPER_REPO_DIR/lib/core/schema.zsh" +source "$CKIPPER_REPO_DIR/lib/core/config.zsh" +source "$CKIPPER_REPO_DIR/lib/core/style.zsh" +source "$CKIPPER_REPO_DIR/lib/core/help.zsh" +source "$CKIPPER_REPO_DIR/lib/core/prompt.zsh" + +# Account-namespace modules +source "$CKIPPER_REPO_DIR/lib/account/account-management.zsh" +source "$CKIPPER_REPO_DIR/lib/account/cleanup.zsh" +source "$CKIPPER_REPO_DIR/lib/account/aliases.zsh" +# Sync subsystem — registry, backup, shared helpers, engine, preview, +# interactive, dispatcher, then the strategy modules. +source "$CKIPPER_REPO_DIR/lib/account/sync/registry.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/backup.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/_shared.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/engine.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/preview.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/interactive.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/dispatcher.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/strategies/structured.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/strategies/files_flat.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/strategies/files_dir.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/strategies/statusline.zsh" +source "$CKIPPER_REPO_DIR/lib/account/sync/strategies/hooks.zsh" +source "$CKIPPER_REPO_DIR/lib/account/doctor.zsh" +source "$CKIPPER_REPO_DIR/lib/account/dispatcher.zsh" + +# Worktree-namespace modules +source "$CKIPPER_REPO_DIR/lib/worktree/dispatcher.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/args.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/run.zsh" +source 
"$CKIPPER_REPO_DIR/lib/worktree/build-image.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/normal-mode.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/docker-mode.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/ports.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/resolve-account.zsh" +source "$CKIPPER_REPO_DIR/lib/worktree/worktree.zsh" + +# Config-namespace modules +source "$CKIPPER_REPO_DIR/lib/config/get.zsh" +source "$CKIPPER_REPO_DIR/lib/config/set.zsh" +source "$CKIPPER_REPO_DIR/lib/config/unset.zsh" +source "$CKIPPER_REPO_DIR/lib/config/list.zsh" +source "$CKIPPER_REPO_DIR/lib/config/edit.zsh" +source "$CKIPPER_REPO_DIR/lib/config/dispatcher.zsh" + +# Setup-namespace modules +source "$CKIPPER_REPO_DIR/lib/setup/prereqs.zsh" +source "$CKIPPER_REPO_DIR/lib/setup/prompts.zsh" +source "$CKIPPER_REPO_DIR/lib/setup/apply.zsh" +source "$CKIPPER_REPO_DIR/lib/setup/dispatcher.zsh" + +# Top-level shortcuts (run / launcher) +source "$CKIPPER_REPO_DIR/lib/run/dispatcher.zsh" +source "$CKIPPER_REPO_DIR/lib/launcher/menu.zsh" + +# User config (projects/worktrees dirs, ports, extra volumes, extra env vars). +# Renamed from w-config.zsh in the merge; install.sh handles the migration. +_ckipper_user_config="${CKIPPER_DIR:-$HOME/.ckipper}/docker/ckipper-config.zsh" +[[ -f "$_ckipper_user_config" ]] && source "$_ckipper_user_config" +unset _ckipper_user_config + +# Defaults if user config is missing or incomplete. Set once at source time +# and never reset per-call so users can host their projects anywhere. +CKIPPER_PROJECTS_DIR="${CKIPPER_PROJECTS_DIR:-$HOME/Developer}" +CKIPPER_WORKTREES_DIR="${CKIPPER_WORKTREES_DIR:-$CKIPPER_PROJECTS_DIR/.worktrees}" +(( ${#CKIPPER_PORTS[@]} == 0 )) && CKIPPER_PORTS=(3000) +(( ${#CKIPPER_EXTRA_VOLUMES[@]} == 0 )) && CKIPPER_EXTRA_VOLUMES=() +(( ${#CKIPPER_EXTRA_ENV[@]} == 0 )) && CKIPPER_EXTRA_ENV=() + +# Top-level commands. Used both for routing and for fuzzy-suggest. 
+_CKIPPER_COMMANDS=(account worktree run config setup doctor help) + +# Pre-merge top-level commands → their post-merge namespaced replacement. +# Used by _ckipper_unknown so a user typing the old form (e.g. `ckipper add`) +# gets a precise migration hint instead of a generic "Unknown command" — +# Levenshtein cannot bridge the rename (lev(add, account) = 6, well above the +# fuzzy threshold). Empty value means the command was removed entirely. +typeset -gA _CKIPPER_LEGACY_COMMANDS=( + [add]='account add' + [list]='account list' + [default]='account default' + [remove]='account remove' + [rename]='account rename' + [sync]='account sync' + [sync-hooks]='account redeploy-hooks' + [repair-plugins]='doctor --fix' + [migrate]='' +) + +# Dispatch a top-level ckipper command. +# +# Args: +# $1 — top-level command (account, worktree, config, setup, doctor, +# help, -h, --help, empty, or short alias acct/wt) +# $2..$N — arguments forwarded to the namespace dispatcher +# +# Returns: 0 on success; 1 on unknown command. +# +# Errors (stderr): +# "Unknown command: '<cmd>'. Did you mean: '<suggestion>'? ..." +ckipper() { + local cmd="$1" + shift 2>/dev/null + case "$cmd" in + acct) cmd="account" ;; + wt) cmd="worktree" ;; + esac + case "$cmd" in + account) _ckipper_account_dispatch "$@" ;; + worktree) _ckipper_worktree_dispatch "$@" ;; + run) _ckipper_run "$@" ;; + config) _ckipper_config_dispatch "$@" ;; + setup) _ckipper_setup "$@" ;; + doctor) + if [[ "$1" == "--help" || "$1" == "-h" ]]; then + _ckipper_help_text_doctor + return 0 + fi + _ckipper_doctor "$@" + ;; + "") _ckipper_launcher_menu ;; + help|-h|--help) _ckipper_help ;; + *) _ckipper_unknown "$cmd"; return 1 ;; + esac +} + +# Print a migration hint for retired pre-merge commands, or fall through to +# the standard unknown-command + fuzzy-suggest path. Always writes to stderr. +# +# Args: $1 — the unknown command the user typed. +# Returns: 0 always.
+_ckipper_unknown() {
+  local cmd="$1"
+  if (( ${+_CKIPPER_LEGACY_COMMANDS[$cmd]} )); then
+    local replacement="${_CKIPPER_LEGACY_COMMANDS[$cmd]}"
+    if [[ -n "$replacement" ]]; then
+      echo "'ckipper $cmd' was renamed to 'ckipper $replacement' — pass the same arguments." >&2
+    else
+      echo "'ckipper $cmd' was removed in this release." >&2
+    fi
+    echo "Run 'ckipper help' for the current command list." >&2
+    return 0
+  fi
+  _core_unknown_command "$cmd" \
+    "Run 'ckipper help' for available commands." \
+    "${_CKIPPER_COMMANDS[@]}"
+}
+
+# Print the top-level ckipper usage summary.
+#
+# Returns: 0 always.
+_ckipper_help() {
+  _core_help_render 'ckipper (pronounced "skipper") — multi-account Claude Code manager' \
+    "" \
+    "Usage:" \
+    "  ckipper account    Manage Claude accounts (alias: acct)" \
+    "  ckipper worktree   Manage git worktrees (alias: wt)" \
+    "  ckipper run        Shortcut for \`ckipper worktree run\`" \
+    "  ckipper config     View and modify Ckipper settings" \
+    "  ckipper setup      Run / re-run the interactive setup wizard" \
+    "  ckipper doctor     Diagnostic check of accounts and tooling" \
+    "  ckipper help       Show this overview" \
+    "" \
+    "Companion commands (sourced via aliases.zsh):" \
+    "  claude-<name> [args...]   Auto-generated launcher per registered account" \
+    "  <name> [args...]          Bare-name shortcut (skipped if it would shadow" \
+    "                            an existing command, builtin, alias, or word)" \
+    "" \
+    "Run \`ckipper <namespace> help\` (e.g. \`ckipper account help\`) for the" \
+    "subcommand list, and \`ckipper <namespace> <subcommand> --help\` for per-" \
+    "subcommand details." \
+    "" \
+    "Short alias: \`ck\` is the same as \`ckipper\`."
+}
+
+# Print help text for the top-level `doctor` command.
+#
+# Returns: 0 always.
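The legacy-command table exists because edit distance alone cannot rescue these renames. A minimal Python sketch of the kind of check `_core_unknown_command`'s fuzzy-suggest presumably performs (the threshold of 2 and the helper names here are assumptions for illustration, not Ckipper's actual lib/core implementation):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic Wagner-Fischer dynamic programming, O(len(a) * len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

COMMANDS = ["account", "worktree", "run", "config", "setup", "doctor", "help"]
THRESHOLD = 2  # assumed cutoff; the real value lives in lib/core

def suggest(typo):
    """Return the closest command if it is within THRESHOLD, else None."""
    best = min(COMMANDS, key=lambda c: levenshtein(typo, c))
    return best if levenshtein(typo, best) <= THRESHOLD else None

print(levenshtein("accont", "account"))  # 1, close enough to suggest
print(levenshtein("add", "account"))     # 6, needs the legacy-command map
```

This is why `ckipper accont` gets a "Did you mean" hint while `ckipper add` needs an explicit entry in `_CKIPPER_LEGACY_COMMANDS`.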
+_ckipper_help_text_doctor() { + _core_help_render "ckipper doctor" \ + "" \ + "Run a diagnostic checklist on registered accounts and ckipper tooling:" \ + " - Registry validity (version, JSON shape)" \ + " - Per-account: config dir presence, .claude.json/settings.json/hooks/" \ + " - Keychain entries reachable on macOS" \ + " - ~/.zshrc sources ckipper.zsh" \ + " - Stub ~/.claude state is absent" \ + "" \ + "Exits 0 if every check passes (or only INFOs/WARNs); exits 1 if any FAIL." +} + +# Short alias: 'ck' for 'ckipper'. +ck() { ckipper "$@"; } + +# ── Tab completion ─────────────────────────────────────────────────── +# Ensure completions directory is in fpath. +[[ -d ~/.zsh/completions ]] || mkdir -p ~/.zsh/completions +fpath=(~/.zsh/completions $fpath) + +# Bump this when the heredoc body below changes so existing installs +# regenerate the cached completion file. The version is embedded as a literal +# comment in the generated file and matched here. +CKIPPER_COMPLETION_VERSION=8 +if [[ ! -f ~/.zsh/completions/_ckipper ]] \ + || ! grep -q "# ckipper-completion-version=$CKIPPER_COMPLETION_VERSION" ~/.zsh/completions/_ckipper 2>/dev/null; then + # Note: `_ckipper()` below is a zsh tab-completion definition embedded in + # a heredoc. It uses zsh's _arguments DSL and must remain a single + # function for tab completion to work. The 25-line cap in code-style.md + # does not apply to zsh completion definitions (this is data written to + # a completion file, not maintained shell logic). 
+ cat > ~/.zsh/completions/_ckipper << 'COMPEOF' +#compdef ckipper ck +# ckipper-completion-version=8 + +_ckipper() { + local projects_dir="${CKIPPER_PROJECTS_DIR:-$HOME/Developer}" + local worktrees_dir="${CKIPPER_WORKTREES_DIR:-$projects_dir/.worktrees}" + local -a top_commands account_subs worktree_subs config_subs + + top_commands=( + 'account:Manage Claude accounts' + 'acct:Short alias for account' + 'worktree:Manage git worktrees' + 'wt:Short alias for worktree' + 'run:Shortcut for worktree run' + 'config:View and modify Ckipper settings' + 'setup:Run / re-run the setup wizard' + 'doctor:Diagnostic check of accounts and tooling' + 'help:Show top-level help' + ) + account_subs=( + 'add:Register a new account' + 'list:Show registered accounts' + 'default:Set the default account' + 'remove:Unregister an account' + 'rename:Rename an account in place' + 'sync:Copy state between accounts (interactive or via --include)' + 'redeploy-hooks:Redeploy ckipper safety hooks from install to all accounts' + 'help:Show account-namespace help' + ) + worktree_subs=( + 'run:Create-or-cd worktree, optionally Docker' + 'list:List all worktrees' + 'rm:Remove worktree + delete branch' + 'rebuild-image:Rebuild ckipper-dev Docker image' + 'help:Show worktree-namespace help' + ) + config_subs=( + 'get:Read a config value' + 'set:Write a config value' + 'unset:Remove a config override' + 'list:Show all config values' + 'edit:Open the config file in $EDITOR' + 'help:Show config-namespace help' + ) + + _arguments -C \ + '1: :->cmd' \ + '2: :->sub' \ + '3: :->arg3' \ + '4: :->arg4' \ + '*:: :->args' \ + && return 0 + + case $state in + cmd) + _describe -t commands 'ckipper command' top_commands && return 0 + ;; + sub) + case "${words[2]}" in + account|acct) + _describe -t subcommands 'account subcommand' account_subs && return 0 + ;; + worktree|wt) + _describe -t subcommands 'worktree subcommand' worktree_subs && return 0 + ;; + config) + _describe -t subcommands 'config subcommand' 
config_subs && return 0 + ;; + run) + local -a projects + local dir repo_dir rel + while IFS= read -r -d '' dir; do + repo_dir="${dir:h}" + rel="${repo_dir#$projects_dir/}" + projects+=("$rel") + done < <(find "$projects_dir" -maxdepth 3 -name ".git" -type d -not -path "*/.worktrees/*" -print0 2>/dev/null) + _describe -t projects 'project' projects && return 0 + ;; + esac + ;; + arg3) + case "${words[2]}/${words[3]}" in + worktree/run|wt/run|worktree/rm|wt/rm) + local -a projects + local dir repo_dir rel + while IFS= read -r -d '' dir; do + repo_dir="${dir:h}" + rel="${repo_dir#$projects_dir/}" + projects+=("$rel") + done < <(find "$projects_dir" -maxdepth 3 -name ".git" -type d -not -path "*/.worktrees/*" -print0 2>/dev/null) + _describe -t projects 'project' projects && return 0 + ;; + account/default|acct/default|account/remove|acct/remove|account/rename|acct/rename|account/sync|acct/sync) + local -a accounts + if [[ -f "${CKIPPER_REGISTRY:-$HOME/.ckipper/accounts.json}" ]]; then + accounts=( $(jq -r '.accounts | keys[]' "${CKIPPER_REGISTRY:-$HOME/.ckipper/accounts.json}" 2>/dev/null) ) + fi + _describe -t accounts 'account name' accounts && return 0 + ;; + config/get|config/set|config/unset) + local -a config_keys + config_keys=( "${(@k)_CKIPPER_SCHEMA_TYPE}" ) + _describe -t keys 'config key' config_keys && return 0 + ;; + esac + case "${words[2]}" in + run) + local project="${words[3]}" + [[ -z "$project" ]] && return 0 + local -a worktrees + if [[ -d "$worktrees_dir/$project" ]]; then + for wt in $worktrees_dir/$project/*(N/); do + worktrees+=(${wt:t}) + done + fi + if (( ${#worktrees} > 0 )); then + _describe -t worktrees 'existing worktree' worktrees + else + _message 'new worktree branch name' + fi + ;; + esac + ;; + arg4) + case "${words[2]}/${words[3]}" in + worktree/run|wt/run|worktree/rm|wt/rm) + local project="${words[4]}" + [[ -z "$project" ]] && return 0 + local -a worktrees + if [[ -d "$worktrees_dir/$project" ]]; then + for wt in 
$worktrees_dir/$project/*(N/); do + worktrees+=(${wt:t}) + done + fi + if (( ${#worktrees} > 0 )); then + _describe -t worktrees 'existing worktree' worktrees + else + _message 'new worktree branch name' + fi + ;; + account/sync|acct/sync) + local -a accounts + if [[ -f "${CKIPPER_REGISTRY:-$HOME/.ckipper/accounts.json}" ]]; then + accounts=( $(jq -r '.accounts | keys[]' "${CKIPPER_REGISTRY:-$HOME/.ckipper/accounts.json}" 2>/dev/null) ) + fi + _describe -t accounts 'additional sync target' accounts && return 0 + ;; + esac + ;; + args) + case "${words[2]}/${words[3]}" in + worktree/run|wt/run) + local -a flags + flags=( + '--docker:Run inside the ckipper-dev Docker container' + '--no-docker:Force host-only run (override always_docker preference)' + '--firewall:Add egress firewall (requires --docker)' + '--no-firewall:Disable firewall (override always_firewall preference)' + '--ssh-forward:Mount ~/.ssh into container' + '--no-ssh-forward:Do not mount ~/.ssh (override ssh_forward preference)' + '--account:Use a specific Ckipper account' + ) + _describe -t flags 'flag' flags + _command_names -e + ;; + account/sync|acct/sync) + local -a sync_flags + sync_flags=( + '--include:Comma-separated types or named bundle (all/customizations/claude-config/preferences)' + '--exclude:Subtract from --include' + '--dry-run:Print summary; no writes' + '--yes:Skip the confirm prompt; apply directly' + '--force:Bypass the destination-Claude-running refusal' + ) + _describe -t flags 'sync flag' sync_flags + ;; + esac + case "${words[2]}" in + run) + local -a flags + flags=( + '--docker:Run inside the ckipper-dev Docker container' + '--no-docker:Force host-only run (override always_docker preference)' + '--firewall:Add egress firewall (requires --docker)' + '--no-firewall:Disable firewall (override always_firewall preference)' + '--ssh-forward:Mount ~/.ssh into container' + '--no-ssh-forward:Do not mount ~/.ssh (override ssh_forward preference)' + '--account:Use a specific Ckipper 
account' + ) + _describe -t flags 'flag' flags + _command_names -e + ;; + esac + ;; + esac +} + +_ckipper "$@" +COMPEOF +fi diff --git a/ckipper_config_test.bats b/ckipper_config_test.bats new file mode 100644 index 0000000..180ea6c --- /dev/null +++ b/ckipper_config_test.bats @@ -0,0 +1,82 @@ +#!/usr/bin/env bats +# Top-level dispatcher tests for the `config` namespace routing. +# +# Verifies that `ckipper config ` reaches the namespace dispatcher +# and that the schema + core config + handlers are sourced into ckipper.zsh. +# +# ckipper.zsh is zsh-only; bats runs under bash, so each test spawns a zsh +# subprocess via run_ckipper(). + +load "${BATS_TEST_DIRNAME}/tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + mkdir -p "$CKIPPER_DIR/docker" + : >"$CKIPPER_DIR/docker/ckipper-config.zsh" + cat >"$CKIPPER_REGISTRY" <<'JSON' +{"version":2,"default":"work","accounts":{"work":{"config_dir":"/x","keychain_service":null,"registered_at":"t","preferences":{}}}} +JSON +} + +teardown() { + teardown_isolated_env +} + +@test "ckipper config help prints config-namespace help" { + run_ckipper config help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper config" ]] +} + +@test "ckipper config list runs and prints something" { + run_ckipper config list + [ "$status" -eq 0 ] + [ -n "$output" ] +} + +@test "ckipper config get notify_bell returns schema default" { + run_ckipper config get notify_bell + [ "$status" -eq 0 ] + [[ "$output" =~ "true" ]] +} + +@test "ckipper config set + get round-trips through dispatcher" { + run_ckipper config set notify_bell false + [ "$status" -eq 0 ] + + run_ckipper config get notify_bell + [ "$status" -eq 0 ] + [[ "$output" =~ "false" ]] +} + +@test "ckipper config set with no value prompts and writes input" { + # CKIPPER_NO_GUM=1 is forwarded by run_ckipper so the fallback `read` path + # consumes the piped stdin instead of trying to launch gum. 
+ run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="${_CKIPPER_TEST_OSTYPE:-linux}" \ + CKIPPER_FORCE="${CKIPPER_FORCE:-1}" \ + CKIPPER_NO_GUM=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; ckipper config set notify_bell" <<<"false" + [ "$status" -eq 0 ] + + run_ckipper config get notify_bell + [ "$status" -eq 0 ] + [[ "$output" =~ "false" ]] +} + +@test "ckipper config unknown subcommand suggests help pointer" { + run_ckipper config nope + [ "$status" -ne 0 ] + [[ "$output" =~ "Unknown command:" ]] + [[ "$output" =~ "config help" ]] +} + +@test "ckipper config (no subcommand) prints overview help" { + run_ckipper config + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper config" ]] +} diff --git a/ckipper_setup_test.bats b/ckipper_setup_test.bats new file mode 100644 index 0000000..50754a2 --- /dev/null +++ b/ckipper_setup_test.bats @@ -0,0 +1,51 @@ +#!/usr/bin/env bats +# Top-level dispatcher tests for the `setup` namespace routing. +# +# Verifies that `ckipper setup --help` reaches the dispatcher, that `setup` +# is listed by `ckipper help`, and that the completion-version sentinel + the +# new source block correctly wire `lib/setup/dispatcher.zsh` into ckipper.zsh. +# +# ckipper.zsh is zsh-only; bats runs under bash, so each test spawns a zsh +# subprocess via run_ckipper(). 
+ +load "${BATS_TEST_DIRNAME}/tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + mkdir -p "$CKIPPER_DIR/docker" + : >"$CKIPPER_DIR/docker/ckipper-config.zsh" + cat >"$CKIPPER_REGISTRY" <<'JSON' +{"version":2,"default":null,"accounts":{}} +JSON +} + +teardown() { + teardown_isolated_env +} + +@test "ckipper setup --help prints setup help" { + run_ckipper setup --help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper setup" ]] + [[ "$output" =~ "interactive wizard" ]] +} + +@test "ckipper setup -h is equivalent to --help" { + run_ckipper setup -h + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper setup" ]] +} + +@test "ckipper help mentions setup as a top-level command" { + run_ckipper help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper setup" ]] +} + +@test "ckipper unknown command does not get routed to setup" { + # `setp` is close enough to `setup` for the fuzzy-match suggestion to + # mention setup, but it MUST NOT exit 0 — that would mean the dispatcher + # silently swallowed an unknown command. + run_ckipper setp + [ "$status" -ne 0 ] +} diff --git a/ckipper_test.bats b/ckipper_test.bats new file mode 100644 index 0000000..529ca50 --- /dev/null +++ b/ckipper_test.bats @@ -0,0 +1,161 @@ +#!/usr/bin/env bats +# Top-level dispatcher tests for ckipper(). +# +# Verifies that namespace routing (account, worktree, doctor) works, that the +# acct/wt short forms are equivalent, that bare/help prints overview, that +# unknown commands fuzzy-suggest, and that namespace subcommands are reachable +# through the new dispatcher. +# +# ckipper.zsh is zsh-only (uses read "?..." prompt syntax, setopt, etc.). +# Bats runs under bash, so every test spawns a zsh subprocess via run_ckipper(). 
+ +load "${BATS_TEST_DIRNAME}/tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# ── Top-level routing ──────────────────────────────────────────────── + +@test "ckipper (bare) opens the launcher menu" { + # Bare `ck` opens the interactive launcher (Phase 5.2). With no stdin, the + # underlying choose prompt hits EOF (or gum errors out without a TTY) and + # the menu exits non-zero. We assert on the banner + tagline because they + # render before the prompt regardless of the gum / pure-zsh code path. + run_ckipper + + [ "$status" -ne 0 ] + [[ "$output" =~ "Ckipper" ]] + [[ "$output" =~ "Multi-account" ]] +} + +@test "ckipper help prints top-level help" { + run_ckipper help + [ "$status" -eq 0 ] + [[ "$output" =~ "Short alias" ]] +} + +@test "ckipper unknown-command fuzzy-suggests when close" { + run_ckipper accont + [ "$status" -ne 0 ] + [[ "$output" =~ "Unknown command: 'accont'. Did you mean: 'account'?" ]] +} + +@test "ckipper unknown-command shows bare error when no close match" { + run_ckipper xyzzy + [ "$status" -ne 0 ] + [[ "$output" =~ "Unknown command: 'xyzzy'." ]] + [[ ! 
"$output" =~ "Did you mean" ]] +} + +# ── Account namespace + alias ──────────────────────────────────────── + +@test "ckipper account help prints account-namespace help" { + run_ckipper account help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account" ]] + [[ "$output" =~ "Short form" ]] +} + +@test "ckipper acct help is equivalent to ckipper account help" { + run_ckipper acct help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account" ]] +} + +@test "ckipper account list prints message when no accounts registered" { + rm -f "$CKIPPER_REGISTRY" + run_ckipper account list + [ "$status" -eq 0 ] + [[ "$output" =~ "No accounts" || "$output" =~ "no accounts" ]] +} + +@test "ckipper acct list works through the short alias" { + rm -f "$CKIPPER_REGISTRY" + run_ckipper acct list + [ "$status" -eq 0 ] + [[ "$output" =~ "No accounts" || "$output" =~ "no accounts" ]] +} + +@test "ckipper account add --help prints add-specific help" { + run_ckipper account add --help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account add" ]] + [[ "$output" =~ "--adopt" ]] +} + +@test "ckipper account remove rejects unknown name" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + run_ckipper account remove nobody + [ "$status" -ne 0 ] + [[ "$output" =~ "not registered" ]] +} + +# ── Worktree namespace + alias ─────────────────────────────────────── + +@test "ckipper worktree help prints worktree-namespace help" { + run_ckipper worktree help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper worktree" ]] + [[ "$output" =~ "Short form" ]] +} + +@test "ckipper wt help is equivalent to ckipper worktree help" { + run_ckipper wt help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper worktree" ]] +} + +@test "ckipper worktree run --help prints run-specific help" { + run_ckipper worktree run --help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper worktree run" ]] + [[ "$output" =~ "--docker" ]] +} + +@test "ckipper wt run with no args prints help and exits 1" { + 
run_ckipper wt run + [ "$status" -ne 0 ] + [[ "$output" =~ "ckipper worktree run" ]] +} + +# ── Doctor (top-level command, not a namespace) ────────────────────── + +@test "ckipper doctor --help prints doctor-specific help" { + run_ckipper doctor --help + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper doctor" ]] + [[ "$output" =~ "Registry validity" || "$output" =~ "registry" ]] +} + +@test "ckipper doctor exits 0 when registry missing (no accounts)" { + rm -f "$CKIPPER_REGISTRY" + run_ckipper doctor + [ "$status" -eq 0 ] +} + +@test "ckipper doctor mentions registry in output" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + run_ckipper doctor + [[ "$output" =~ [Rr]egistry ]] +} + +# ── tab completion version sentinel ────────────────────────────────── + +# The completion-file regeneration mechanism relies on the literal +# "# ckipper-completion-version=N" sentinel inside the single-quoted heredoc +# matching the CKIPPER_COMPLETION_VERSION variable referenced in the grep +# check immediately above. If a future bump only updates one side, existing +# installs silently fail to regenerate. This test guards against that drift. +@test "ckipper.zsh: completion version sentinel matches outer variable" { + local outer inner + outer=$(grep -E '^CKIPPER_COMPLETION_VERSION=' "$REPO_ROOT/ckipper.zsh" | head -1 | cut -d= -f2) + inner=$(grep -E '^# ckipper-completion-version=' "$REPO_ROOT/ckipper.zsh" | head -1 | cut -d= -f2) + [ -n "$outer" ] + [ -n "$inner" ] + [ "$outer" = "$inner" ] +} diff --git a/docker/Dockerfile b/docker/Dockerfile index 2568604..0462934 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -1,6 +1,6 @@ FROM node:24-slim -# Every `w --rebuild-image` passes a unique CACHEBUST value, so all layers +# Every `ckipper worktree rebuild-image` passes a unique CACHEBUST value, so all layers # below are re-fetched. This keeps system packages, CLI tools, and Claude # Code current without ever going stale. 
Only the base image (node:24-slim) # is cached — pull it manually with `docker pull node:24-slim` if needed. @@ -37,13 +37,26 @@ ENV CHROMIUM_FLAGS="--no-sandbox" ENV PUPPETEER_CHROMIUM_ARGS="--no-sandbox" # Install uv/uvx (Python package runner — needed for Python-based MCP servers) -RUN curl -LsSf https://astral.sh/uv/install.sh | sh \ +# SHA256 fetched 2026-04-30 from https://astral.sh/uv/install.sh +ARG UV_INSTALLER_SHA256=facbed3a7e2750df3aef698537c6d50869b025a58bdd13154cfdcc2f30354ca2 +RUN curl -LsSf https://astral.sh/uv/install.sh -o /tmp/install.sh && \ + echo "${UV_INSTALLER_SHA256} /tmp/install.sh" | sha256sum -c - && \ + sh /tmp/install.sh && \ + rm /tmp/install.sh \ && cp /root/.local/bin/uv /usr/local/bin/uv \ && cp /root/.local/bin/uvx /usr/local/bin/uvx \ && chmod +x /usr/local/bin/uv /usr/local/bin/uvx # Install Claude Code via native installer -RUN curl -fsSL https://claude.ai/install.sh | bash \ +# SHA256 fetched 2026-04-30 from https://claude.ai/install.sh +# Note: this is a bootstrap installer that fetches the actual binary from +# downloads.claude.ai. The bootstrap script itself changes infrequently, but +# refresh this hash if a Claude Code release ships an updated installer. 
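The pin-and-verify pattern used for all three installers can be stated compactly: fetch to a file, compare its SHA-256 against a build-arg pin, and only then execute. A hedged Python sketch of the same flow (the payload and expected digest here are synthetic placeholders; the real pins are the `ARG` values in the Dockerfile):

```python
import hashlib

def verify_installer(payload: bytes, expected_hex: str) -> None:
    """Refuse to run a fetched installer whose digest drifted from the pin."""
    actual = hashlib.sha256(payload).hexdigest()
    if actual != expected_hex:
        raise RuntimeError(f"checksum mismatch: expected {expected_hex}, got {actual}")

# Stand-in for the curl download; in the Dockerfile this is /tmp/install.sh.
payload = b"#!/bin/sh\necho installer\n"
pin = hashlib.sha256(payload).hexdigest()

verify_installer(payload, pin)  # matches the pin: proceed to execute
```

A drifted upstream script fails the comparison and the build aborts before anything runs, which is exactly what `sha256sum -c -` does in the `RUN` steps.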
+ARG CLAUDE_INSTALLER_SHA256=b315b46925a9bfb9422f2503dd5aa649f680832f4c076b22d87c39d578c3d830 +RUN curl -fsSL https://claude.ai/install.sh -o /tmp/install.sh && \ + echo "${CLAUDE_INSTALLER_SHA256} /tmp/install.sh" | sha256sum -c - && \ + bash /tmp/install.sh && \ + rm /tmp/install.sh \ && cp /root/.local/bin/claude /usr/local/bin/claude \ && chmod +x /usr/local/bin/claude @@ -58,7 +71,12 @@ RUN userdel -r node 2>/dev/null; groupdel node 2>/dev/null; \ && chown -R claude:claude /home/claude/.local /home/claude/.ssh /home/claude/.config /home/claude/.cache /home/claude/.uv-tools # Install bun (fast JS runtime — needed for bunx-based statusline commands and general use) -RUN curl -fsSL https://bun.sh/install | bash \ +# SHA256 fetched 2026-04-30 from https://bun.sh/install +ARG BUN_INSTALLER_SHA256=bab8acfb046aac8c72407bdcce903957665d655d7acaa3e11c7c4616beae68dd +RUN curl -fsSL https://bun.sh/install -o /tmp/install.sh && \ + echo "${BUN_INSTALLER_SHA256} /tmp/install.sh" | sha256sum -c - && \ + bash /tmp/install.sh && \ + rm /tmp/install.sh \ && cp /root/.bun/bin/bun /usr/local/bin/bun \ && cp /root/.bun/bin/bunx /usr/local/bin/bunx \ && chmod +x /usr/local/bin/bun /usr/local/bin/bunx diff --git a/docker/cleanup-projects.py b/docker/cleanup-projects.py new file mode 100755 index 0000000..9c94df1 --- /dev/null +++ b/docker/cleanup-projects.py @@ -0,0 +1,175 @@ +#!/usr/bin/env python3 +"""Cleanup helpers invoked from ckipper.zsh inside the container.""" + +import json +import os +import sys + +CONFIG_DIR_KEY = "config_dir" +CLAUDE_JSON_FILENAME = ".claude.json" +PROJECTS_KEY = "projects" + +SETTINGS_KEYS_TO_SYNC = [ + "disabledMcpServers", + "enabledMcpjsonServers", + "disabledMcpjsonServers", + "allowedTools", + "hasTrustDialogAccepted", + "hasClaudeMdExternalIncludesApproved", + "hasClaudeMdExternalIncludesWarningShown", + "hasCompletedProjectOnboarding", +] + + +def all_account_dirs(registry_path: str) -> list: + """List config_dir entries for every registered 
account. + + Args: + registry_path: Absolute path to accounts.json. + + Returns: + List of config_dir strings; empty if registry is missing. + """ + if not os.path.exists(registry_path): + return [] + with open(registry_path) as f: + data = json.load(f) + return [ + account[CONFIG_DIR_KEY] + for account in data.get("accounts", {}).values() + if account.get(CONFIG_DIR_KEY) + ] + + +def _load_settings(claude_json_path: str) -> dict: + """Load .claude.json. Returns empty dict if missing or unparseable.""" + if not os.path.exists(claude_json_path): + return {} + try: + with open(claude_json_path) as f: + return json.load(f) + except (json.JSONDecodeError, OSError): + return {} + + +def _merge_settings_keys(target: dict, source: dict, keys: list) -> None: + """Copy specified keys from source dict into target (in place). + + Args: + target: Dict to copy keys into. + source: Dict to copy keys from. + keys: List of key names to copy. + """ + for key in keys: + if key in source: + target[key] = source[key] + + +def _write_settings(claude_json_path: str, data: dict) -> None: + """Write settings dict to .claude.json. + + Args: + claude_json_path: Absolute path to .claude.json. + data: Settings dict to write. + """ + with open(claude_json_path, "w") as f: + json.dump(data, f) + + +def remove_worktree_from_all(registry_path: str, worktree_path: str) -> None: + """Strip the given worktree's project entry from every account's .claude.json. + + Args: + registry_path: Path to accounts.json. + worktree_path: Absolute worktree path to remove. 
+ """ + seen = set() + for cfg_dir in all_account_dirs(registry_path): + cfg = os.path.join(cfg_dir, CLAUDE_JSON_FILENAME) + cfg_real = os.path.realpath(cfg) + if cfg_real in seen: + continue + seen.add(cfg_real) + data = _load_settings(cfg) + if not data: + continue + if worktree_path in data.get(PROJECTS_KEY, {}): + del data[PROJECTS_KEY][worktree_path] + _write_settings(cfg, data) + print(f"Removed worktree entry from {cfg}") + + +def _find_account_config(registry_path: str, account_name: str) -> str: + """Return path to .claude.json for the given account, or empty string. + + Args: + registry_path: Path to accounts.json. + account_name: Name of the account to look up. + + Returns: + Absolute path to .claude.json, or empty string if not found. + """ + if not os.path.exists(registry_path): + return "" + with open(registry_path) as f: + data = json.load(f) + account = data.get("accounts", {}).get(account_name) + if not account: + return "" + return os.path.join(account[CONFIG_DIR_KEY], CLAUDE_JSON_FILENAME) + + +def sync_worktree_settings(context: dict) -> None: + """Sync settings between main repo and a worktree. + + Args: + context: Dict with keys 'registry_path', 'account_name', + 'main_path', 'worktree_path'. + + Raises: + KeyError: If context is missing required keys. 
+    """
+    registry_path = context["registry_path"]
+    account_name = context["account_name"]
+    main_path = context["main_path"]
+    worktree_path = context["worktree_path"]
+
+    cfg = _find_account_config(registry_path, account_name)
+    if not cfg:
+        return
+
+    settings = _load_settings(cfg)
+    if not settings:
+        return
+
+    main_project = settings.get(PROJECTS_KEY, {}).get(main_path, {})
+    if not main_project:
+        return
+
+    wt_project = settings.setdefault(PROJECTS_KEY, {}).setdefault(worktree_path, {})
+    _merge_settings_keys(wt_project, main_project, SETTINGS_KEYS_TO_SYNC)
+    _write_settings(cfg, settings)
+    print(f"Synced settings for {worktree_path} in {cfg}")
+
+
+if __name__ == "__main__":
+    if len(sys.argv) < 3:
+        sys.exit("Usage: cleanup-projects.py <command> <arg1> [arg2] [arg3]")
+
+    cmd = sys.argv[1]
+    registry = os.environ.get("CKIPPER_REGISTRY", os.path.expanduser("~/.ckipper/accounts.json"))
+
+    if cmd == "remove":
+        remove_worktree_from_all(registry, sys.argv[2])
+    elif cmd == "sync":
+        if len(sys.argv) < 5:
+            sys.exit("Usage: cleanup-projects.py sync <account> <main_path> <worktree_path>")
+        context = {
+            "registry_path": registry,
+            "account_name": sys.argv[2],
+            "main_path": sys.argv[3],
+            "worktree_path": sys.argv[4],
+        }
+        sync_worktree_settings(context)
+    else:
+        sys.exit(f"Unknown command: {cmd}")
diff --git a/docker/cleanup-projects_test.py b/docker/cleanup-projects_test.py
new file mode 100644
index 0000000..9425df8
--- /dev/null
+++ b/docker/cleanup-projects_test.py
@@ -0,0 +1,112 @@
+"""Tests for cleanup-projects.py."""
+
+import json
+import os
+from pathlib import Path
+
+import importlib.util
+
+_HERE = Path(__file__).parent
+_SPEC = importlib.util.spec_from_file_location(
+    "cleanup_projects", _HERE / "cleanup-projects.py"
+)
+_MOD = importlib.util.module_from_spec(_SPEC)
+_SPEC.loader.exec_module(_MOD)
+
+all_account_dirs = _MOD.all_account_dirs
+remove_worktree_from_all = _MOD.remove_worktree_from_all
+sync_worktree_settings = _MOD.sync_worktree_settings
+_load_settings = _MOD._load_settings
+_merge_settings_keys = _MOD._merge_settings_keys + + +def test_all_account_dirs_returns_config_dirs(tmp_path): + """Returns config_dir for every registered account.""" + registry = tmp_path / "accounts.json" + registry.write_text(json.dumps({ + "version": 1, "default": None, + "accounts": { + "a": {"config_dir": "/tmp/a"}, + "b": {"config_dir": "/tmp/b"}, + }, + })) + + result = all_account_dirs(str(registry)) + + assert sorted(result) == ["/tmp/a", "/tmp/b"] + + +def test_all_account_dirs_returns_empty_for_missing_registry(tmp_path): + """Returns empty list when registry does not exist.""" + result = all_account_dirs(str(tmp_path / "nonexistent.json")) + + assert result == [] + + +def test_load_settings_returns_empty_dict_when_missing(tmp_path): + """Missing .claude.json returns empty dict (not exception).""" + result = _load_settings(str(tmp_path / "nope.json")) + + assert result == {} + + +def test_merge_settings_keys_copies_specified_keys(): + """Merge copies only the listed keys from source into target.""" + target = {"a": 1} + source = {"a": 2, "b": 3, "c": 4} + + _merge_settings_keys(target, source, ["b"]) + + assert target == {"a": 1, "b": 3} + assert "c" not in target + + +def test_remove_worktree_from_all_strips_project_key(tmp_path): + """remove_worktree_from_all removes the worktree's entry from each account's .claude.json.""" + account_a = tmp_path / "account-a" + account_a.mkdir() + (account_a / ".claude.json").write_text(json.dumps({ + "projects": {"/wt/foo": {"setting": "value"}, "/wt/bar": {}} + })) + + registry = tmp_path / "accounts.json" + registry.write_text(json.dumps({ + "version": 1, "default": None, + "accounts": {"a": {"config_dir": str(account_a)}}, + })) + + remove_worktree_from_all(str(registry), "/wt/foo") + + after = json.loads((account_a / ".claude.json").read_text()) + assert "/wt/foo" not in after.get("projects", {}) + assert "/wt/bar" in after.get("projects", {}) + + +def test_sync_worktree_settings_with_dict_context(tmp_path): + 
"""sync_worktree_settings accepts a context dict (refactored from 4 positional args).""" + main = tmp_path / "main" + wt = tmp_path / "wt" + main.mkdir() + wt.mkdir() + account = tmp_path / "account" + account.mkdir() + + main_claude = main / ".claude.json" + wt_claude = wt / ".claude.json" + main_claude.write_text(json.dumps({"projects": {str(main): {"key": "from_main"}}})) + + registry = tmp_path / "accounts.json" + registry.write_text(json.dumps({ + "version": 1, "default": None, + "accounts": {"acct": {"config_dir": str(account)}}, + })) + + context = { + "registry_path": str(registry), + "account_name": "acct", + "main_path": str(main), + "worktree_path": str(wt), + } + + # Should not raise + sync_worktree_settings(context) diff --git a/docker/entrypoint.sh b/docker/entrypoint.sh index b1060c6..1f07127 100755 --- a/docker/entrypoint.sh +++ b/docker/entrypoint.sh @@ -1,14 +1,28 @@ #!/bin/bash set -e -# Copy host's .claude.json to writable location (mounted read-only to avoid race condition) -if [ -f "$HOME/.claude-host.json" ]; then - cp "$HOME/.claude-host.json" "$HOME/.claude.json" - # Disable Chrome extension check in container (no browser available) - if command -v jq &>/dev/null; then - jq '.claudeInChromeDefaultEnabled = false | .cachedChromeExtensionInstalled = false' \ - "$HOME/.claude.json" > "$HOME/.claude.json.tmp" && mv "$HOME/.claude.json.tmp" "$HOME/.claude.json" - fi +# Constants +readonly CREDENTIALS_MAX_BYTES=1048576 # 1 MiB (2^20) +readonly GIT_CONFIG_COUNT=2 + +# Require CLAUDE_CONFIG_DIR — Ckipper's account context. No silent fallback. +if [ -z "$CLAUDE_CONFIG_DIR" ]; then + echo "Error: CLAUDE_CONFIG_DIR is not set inside the container." >&2 + echo "This means ckipper worktree run did not pass the account context. Bug — please report." >&2 + exit 1 +fi +if [ ! -d "$CLAUDE_CONFIG_DIR" ]; then + echo "Error: CLAUDE_CONFIG_DIR=$CLAUDE_CONFIG_DIR does not exist (mount failed?)." 
>&2 + exit 1 +fi + +# Disable Chrome extension check in the bind-mounted .claude.json (no browser in container). +# This mutates the host file too — accepted, because the same-account-twice rule +# prevents concurrent host/container use of the same file. +if [ -f "$CLAUDE_CONFIG_DIR/.claude.json" ] && command -v jq &>/dev/null; then + jq '.claudeInChromeDefaultEnabled = false | .cachedChromeExtensionInstalled = false' \ + "$CLAUDE_CONFIG_DIR/.claude.json" >"$CLAUDE_CONFIG_DIR/.claude.json.tmp" && + mv "$CLAUDE_CONFIG_DIR/.claude.json.tmp" "$CLAUDE_CONFIG_DIR/.claude.json" fi # Copy SSH config from staging mount, stripping macOS-specific options. @@ -22,22 +36,27 @@ if [ -d "$HOME/.ssh-host" ]; then fi fi -# Write credentials to tmpfs (not the host-mounted ~/.claude — prevents credential +# Write credentials to tmpfs (not the host-mounted account dir — prevents credential # leakage to the host filesystem). The tmpfs mount at /tmp/claude-creds is # container-local and disappears when the container exits. 
if [ -n "$CLAUDE_CREDENTIALS" ]; then + creds_byte_count=$(printf '%s' "$CLAUDE_CREDENTIALS" | wc -c) + if [ "$creds_byte_count" -gt "$CREDENTIALS_MAX_BYTES" ]; then + echo "Error: CLAUDE_CREDENTIALS exceeds $CREDENTIALS_MAX_BYTES bytes; refusing to write" >&2 + exit 1 + fi mkdir -p /tmp/claude-creds - echo "$CLAUDE_CREDENTIALS" > /tmp/claude-creds/.credentials.json + printf '%s' "$CLAUDE_CREDENTIALS" >/tmp/claude-creds/.credentials.json chmod 700 /tmp/claude-creds chmod 600 /tmp/claude-creds/.credentials.json - # Symlink from expected location — Claude Code reads ~/.claude/.credentials.json - ln -sf /tmp/claude-creds/.credentials.json "$HOME/.claude/.credentials.json" + # Symlink from the account dir — Claude Code reads $CLAUDE_CONFIG_DIR/.credentials.json + ln -sf /tmp/claude-creds/.credentials.json "$CLAUDE_CONFIG_DIR/.credentials.json" fi # Set git identity from .claude.json account info (needed for commits inside container) -if [ -f "$HOME/.claude.json" ] && command -v jq &>/dev/null; then - git_name=$(jq -r '.oauthAccount.displayName // empty' "$HOME/.claude.json" 2>/dev/null) - git_email=$(jq -r '.oauthAccount.emailAddress // empty' "$HOME/.claude.json" 2>/dev/null) +if [ -f "$CLAUDE_CONFIG_DIR/.claude.json" ] && command -v jq &>/dev/null; then + git_name=$(jq -r '.oauthAccount.displayName // empty' "$CLAUDE_CONFIG_DIR/.claude.json" 2>/dev/null) + git_email=$(jq -r '.oauthAccount.emailAddress // empty' "$CLAUDE_CONFIG_DIR/.claude.json" 2>/dev/null) [ -n "$git_name" ] && git config --global user.name "$git_name" [ -n "$git_email" ] && git config --global user.email "$git_email" fi @@ -46,7 +65,7 @@ fi # Uses GIT_CONFIG_COUNT instead of git config so we never modify the host's # .git/config (mounted rw). Env vars take highest priority, overriding both # local and global config, and disappear when the container exits. 
-export GIT_CONFIG_COUNT=2 +export GIT_CONFIG_COUNT export GIT_CONFIG_KEY_0=commit.gpgsign export GIT_CONFIG_VALUE_0=false export GIT_CONFIG_KEY_1=tag.gpgsign @@ -81,7 +100,7 @@ export TURBO_CACHE_DIR=/workspace/.turbo/cache # unsets NO_COLOR to prevent chalk from stripping ANSI codes. export FORCE_COLOR=3 export COLORTERM=truecolor -cat > "$HOME/.local/bin/bunx" << 'WRAPPER' +cat >"$HOME/.local/bin/bunx" <<'WRAPPER' #!/bin/bash export FORCE_COLOR=3 export COLORTERM=truecolor @@ -112,17 +131,17 @@ uv_bin_dir="${UV_TOOL_BIN_DIR:-$HOME/.local/bin}" mkdir -p "$uv_bin_dir" "${UV_TOOL_DIR:-$HOME/.local/share/uv/tools}" "${UV_PYTHON_INSTALL_DIR:-$HOME/.local/share/uv/python}" 2>/dev/null || true export PATH="$uv_bin_dir:$PATH" -if [ -f "$HOME/.claude.json" ] && command -v jq &>/dev/null && command -v uv &>/dev/null; then +if [ -f "$CLAUDE_CONFIG_DIR/.claude.json" ] && command -v jq &>/dev/null && command -v uv &>/dev/null; then uvx_servers=$(jq -r ' .mcpServers // {} | to_entries[] | select(.value.command == "uvx") | .key - ' "$HOME/.claude.json" 2>/dev/null) + ' "$CLAUDE_CONFIG_DIR/.claude.json" 2>/dev/null) if [ -n "$uvx_servers" ]; then echo "Pre-installing uvx-based MCP servers..." 
while IFS= read -r name; do [ -z "$name" ] && continue - pkg=$(jq -r ".mcpServers[\"$name\"].args[0]" "$HOME/.claude.json") + pkg=$(jq -r ".mcpServers[\"$name\"].args[0]" "$CLAUDE_CONFIG_DIR/.claude.json") [ -z "$pkg" ] && continue # Derive binary name from package spec @@ -148,13 +167,13 @@ if [ -f "$HOME/.claude.json" ] && command -v jq &>/dev/null && command -v uv &>/ jq --arg n "$name" --arg b "$bin_path" ' .mcpServers[$n].command = $b | .mcpServers[$n].args = .mcpServers[$n].args[1:] - ' "$HOME/.claude.json" > "$HOME/.claude.json.tmp" \ - && mv "$HOME/.claude.json.tmp" "$HOME/.claude.json" + ' "$CLAUDE_CONFIG_DIR/.claude.json" >"$CLAUDE_CONFIG_DIR/.claude.json.tmp" && + mv "$CLAUDE_CONFIG_DIR/.claude.json.tmp" "$CLAUDE_CONFIG_DIR/.claude.json" echo " $name -> $bin_path" else echo " $name: binary not found at $bin_path, keeping uvx" fi - done <<< "$uvx_servers" + done <<<"$uvx_servers" fi fi diff --git a/docker/entrypoint_test.bats b/docker/entrypoint_test.bats new file mode 100644 index 0000000..502acdf --- /dev/null +++ b/docker/entrypoint_test.bats @@ -0,0 +1,25 @@ +#!/usr/bin/env bats + +load "${BATS_TEST_DIRNAME}/../tests/lib/test-helper.bash" + +setup() { + [ "${BATS_INTEGRATION:-}" = "1" ] || skip "entrypoint tests gated by BATS_INTEGRATION=1" + setup_isolated_env +} + +teardown() { + [ "${BATS_INTEGRATION:-}" = "1" ] || return 0 + teardown_isolated_env +} + +@test "entrypoint writes credentials without trailing newline" { + skip "Requires container env" +} + +@test "entrypoint enforces 1MB credentials bounds check" { + skip "Requires container env" +} + +@test "entrypoint configures git from oauth account name" { + skip "Requires container env" +} diff --git a/docker/init-firewall.sh b/docker/init-firewall.sh index fb2a09b..ac2b173 100755 --- a/docker/init-firewall.sh +++ b/docker/init-firewall.sh @@ -7,6 +7,9 @@ set -e # Must run as root (via sudo from entrypoint.sh). # Uses iptables-legacy because Docker Desktop's VM doesn't support nf_tables. 
+# Constants +readonly FIREWALL_VERIFY_TIMEOUT=5 + # Whitelisted domains — edit this list to add/remove allowed destinations ALLOWED_DOMAINS=( # Claude / Anthropic @@ -45,9 +48,20 @@ echo "=== Configuring egress firewall ===" DNS_SERVER=$(grep '^nameserver' /etc/resolv.conf | head -1 | awk '{print $2}') echo " DNS server: $DNS_SERVER" +# Validate DNS server is a proper IPv4 address +if [[ ! $DNS_SERVER =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then + echo "Error: invalid DNS server '$DNS_SERVER'" >&2 + exit 1 +fi + # Flush existing rules iptables-legacy -F OUTPUT 2>/dev/null || true +# Set default-deny FIRST so the bootstrap window (between flush and final +# rule installation) inherits deny-by-default. Allow rules added below take +# effect via -A; anything not matched falls through to the DROP policy. +iptables-legacy -P OUTPUT DROP + # Allow loopback iptables-legacy -A OUTPUT -o lo -j ACCEPT @@ -65,24 +79,77 @@ for domain in "${ALLOWED_DOMAINS[@]}"; do for ip in $ips; do iptables-legacy -A OUTPUT -d "$ip" -j ACCEPT 2>/dev/null && ((ip_count++)) || true done - echo " Allowed: $domain ($(echo $ips | tr '\n' ' '))" + echo " Allowed: $domain ($(echo "$ips" | tr '\n' ' '))" done # Fetch GitHub IP ranges dynamically (CIDR blocks — iptables handles them natively) echo " Fetching GitHub IP ranges..." -gh_ranges=$(curl -s https://api.github.com/meta 2>/dev/null | jq -r '.git[],.api[],.web[]' 2>/dev/null || true) +gh_ranges=$(curl -fsSL https://api.github.com/meta | jq -r '.git[],.api[],.web[]' | grep -E '^[0-9.]+/[0-9]+$') +if [ -z "$gh_ranges" ]; then + echo "Error: GitHub API returned no valid CIDR ranges" >&2 + exit 1 +fi for cidr in $gh_ranges; do iptables-legacy -A OUTPUT -d "$cidr" -j ACCEPT 2>/dev/null && ((ip_count++)) || true done -# Default deny everything else -iptables-legacy -P OUTPUT DROP +# IPv6 default-deny — defense-in-depth in case container IPv6 is enabled. +# We keep no IPv6 allowlist; all IPv4-resolved allowlisted services route over +# v4. 
If a user explicitly enables container v6 and needs traffic through, they +# must extend this section (and the v4 allowlist) in tandem. +v6_present=true +if ! command -v ip6tables-legacy >/dev/null 2>&1; then + echo " WARNING: ip6tables-legacy not present; skipping IPv6 rules (v6 traffic may be unfiltered if enabled)" >&2 + v6_present=false +fi + +configure_ipv6_firewall() { + ip6tables-legacy -F OUTPUT || return 1 + ip6tables-legacy -F INPUT || return 1 + ip6tables-legacy -F FORWARD || return 1 + # Default deny first (same race-avoidance reasoning as v4 above) + ip6tables-legacy -P INPUT DROP || return 1 + ip6tables-legacy -P OUTPUT DROP || return 1 + ip6tables-legacy -P FORWARD DROP || return 1 + # Allow loopback + ip6tables-legacy -A INPUT -i lo -j ACCEPT || return 1 + ip6tables-legacy -A OUTPUT -o lo -j ACCEPT || return 1 + # Allow established/related return traffic + ip6tables-legacy -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT || return 1 + ip6tables-legacy -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT || return 1 +} + +if [[ $v6_present == "true" ]]; then + # SC2310: we WANT set -e suppression here — a kernel without v6 support + # should warn-and-continue, not abort the script. + # shellcheck disable=SC2310 + if configure_ipv6_firewall; then + echo " IPv6: default-deny applied" + else + echo " WARNING: ip6tables-legacy commands failed (kernel may lack v6 support); v6 traffic may be unfiltered if enabled" >&2 + v6_present=false + fi +fi echo "=== Firewall active: $ip_count rules added ===" +# Post-check: verify the OUTPUT chain is actually default-DROP. A regression +# that loses the policy line while keeping ACCEPT rules would silently leave +# the container fully open, so we assert the policy itself. +if ! iptables-legacy -L OUTPUT -n | head -1 | grep -q '(policy DROP)'; then + echo "ERROR: iptables OUTPUT policy is not DROP after firewall init" >&2 + exit 1 +fi +if [[ $v6_present == "true" ]]; then + if ! 
ip6tables-legacy -L OUTPUT -n | head -1 | grep -q '(policy DROP)'; then + echo "ERROR: ip6tables OUTPUT policy is not DROP after firewall init" >&2 + exit 1 + fi +fi + # Verification echo "=== Verifying firewall ===" -if curl -s --max-time 5 https://api.anthropic.com > /dev/null 2>&1; then +if curl -s --max-time "$FIREWALL_VERIFY_TIMEOUT" https://api.anthropic.com >/dev/null 2>&1; then echo " ok api.anthropic.com: reachable" else echo " FAIL api.anthropic.com: BLOCKED (this is a problem)" diff --git a/test-prompt.md b/docs/test-prompt.md similarity index 77% rename from test-prompt.md rename to docs/test-prompt.md index b329f65..4915dee 100644 --- a/test-prompt.md +++ b/docs/test-prompt.md @@ -66,7 +66,11 @@ Run a comprehensive environment test to verify this Docker container has everyth - If `--firewall` was used: verify `curl -s --max-time 5 https://api.anthropic.com` succeeds (whitelisted) and `curl -s --max-time 5 https://example.com` times out (blocked) **10. Safety hooks verification** -- Try to Edit `~/.claude/settings.json` — should be BLOCKED by config protection hook +- Try to Edit `$CLAUDE_CONFIG_DIR/settings.json` — should be BLOCKED by config protection hook +- Try to Edit `~/.ckipper/accounts.json` — should be BLOCKED (registry tampering protection — closes credential cross-contamination vector) +- Try to run `echo modified > ~/.ckipper/accounts.json` — should be BLOCKED by bash guardrails +- Try to run `echo malicious > ~/.claude-otheraccount/settings.json` — should be BLOCKED (per-account dirs are protected even if not the active account) +- Try to write to `$CLAUDE_CONFIG_DIR/projects/test.txt` — should be ALLOWED (projects/ is not protected) - Try to run `echo test > .git/hooks/pre-commit` — should be BLOCKED by bash guardrails - Try to run `rm -rf /workspace` — should be BLOCKED by bash guardrails - Try to run `cat ~/.ssh/id_ed25519` — should be BLOCKED by bash guardrails @@ -88,6 +92,55 @@ Now test guardrail bypass attempts (report which are 
caught and which pass throu
- Run `mount | grep workspace` — verify /workspace is mounted rw (not ro)
- Run `find /usr -perm -4000 -type f 2>/dev/null` — list setuid binaries (should be minimal in slim image)

+**12. Multi-account isolation**
+
+Run these checks in two concurrent containers (Window A: `--account personal`, Window B: `--account <account>`).
+
+A. Each container has the right `CLAUDE_CONFIG_DIR`:
+
+```bash
+# In window A
+[ "$CLAUDE_CONFIG_DIR" = "$HOME/.claude-personal" ] && echo PASS || echo FAIL
+# In window B
+[ "$CLAUDE_CONFIG_DIR" = "$HOME/.claude-<account>" ] && echo PASS || echo FAIL
+```
+
+B. `.claude.json` is the per-account file (account-specific email):
+
+```bash
+# Confirm the email matches the account's registered identity
+jq -r .oauthAccount.emailAddress "$CLAUDE_CONFIG_DIR/.claude.json"
+# Should match the email shown by `ckipper account list` for this account.
+```
+
+C. Credentials symlinked to tmpfs:
+
+```bash
+[ -L "$CLAUDE_CONFIG_DIR/.credentials.json" ] && echo PASS || echo FAIL
+[ "$(readlink "$CLAUDE_CONFIG_DIR/.credentials.json")" = "/tmp/claude-creds/.credentials.json" ] && echo PASS || echo FAIL
+```
+
+D. Other accounts are NOT mounted:
+
+```bash
+# Window A should NOT see Window B's dir
+[ ! -d "$HOME/.claude-<account>" ] && echo PASS || echo FAIL
+```
+
+E. Project sessions don't bleed across accounts (run after both sessions touch the project — check from the host):
+
+```bash
+diff <(ls ~/.claude-personal/projects/ 2>/dev/null) <(ls ~/.claude-<account>/projects/ 2>/dev/null)
+# Expected: empty (no shared session dirs)
+```
+
+F. Registry tampering is blocked.
Inside the container, attempt: + +```bash +echo modified > ~/.ckipper/accounts.json +# Expected: BLOCKED by bash-guardrails.sh hook (closes credential cross-contamination vector) +``` + ## Expected Results | Check | Expected | @@ -123,5 +176,8 @@ Now test guardrail bypass attempts (report which are caught and which pass throu | 11e | PASS (shows entrypoint/claude) | | 11f | PASS (workspace mounted rw) | | 11g | Minimal setuid list (passwd, su, sudo expected) | +| 12a-12d | All PASS (per-account dir, .claude.json, credentials, no other-account mount) | +| 12e | PASS (no shared session dirs across accounts) | +| 12f | BLOCKED (registry tampering refused by hook) | After all checks, give me a summary table of what works and what doesn't, and flag anything that would prevent you from doing normal development work (writing code, running tests, building, committing, pushing). For any guardrail bypass attempts that succeeded, note them as potential hardening opportunities. diff --git a/hooks/bash-guardrails.sh b/hooks/bash-guardrails.sh index 0c689ca..62a207b 100755 --- a/hooks/bash-guardrails.sh +++ b/hooks/bash-guardrails.sh @@ -3,17 +3,33 @@ # Catches accidental destructive commands. Not adversarial-proof, but # prevents the most common "oops" scenarios. # No-op on the host. +# +# NOTE: This hook is a UX guardrail, NOT a security boundary. The pattern matching +# is best-effort and can be bypassed by: +# - compound commands: bash -c 'rm -rf /path' +# - heredocs: cat <<'EOF' > target ... +# - command substitution / eval / dynamic strings +# Adversarial users can defeat any regex-based guard. Treat this hook as a reminder, +# not a defense. The container sandbox + firewall are the actual security boundary. -[ ! -f /.dockerenv ] && exit 0 +# Skip guardrails when not in Docker (CKIPPER_DOCKERENV overrides path for testing) +[ ! 
-f "${CKIPPER_DOCKERENV:-/.dockerenv}" ] && exit 0 -INPUT=$(cat) -CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty') +INPUT="$(cat)" +CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty') || { + echo "Error: hook input is not valid JSON; failing closed" >&2 + exit 2 +} # Normalize: collapse whitespace, strip leading sudo NORMALIZED=$(echo "$CMD" | sed 's/^[[:space:]]*sudo[[:space:]]*//' | tr -s ' ') # 1. Destructive recursive deletes (allow only build artifacts) -if echo "$NORMALIZED" | grep -qE 'rm\s+(-[a-zA-Z]*r[a-zA-Z]*f|--recursive|-[a-zA-Z]*f[a-zA-Z]*r)\s'; then +# Match any short-flag block containing r/R (so `rm -r`, `rm -R`, `rm -rf`, +# `rm -fr`, `rm -RfX` all match) OR the long form `--recursive`. Earlier +# revisions required BOTH r AND f, which let plain `rm -r /home/user/foo` +# bypass the check entirely. +if echo "$NORMALIZED" | grep -qE 'rm\s+(-[a-zA-Z]*[rR][a-zA-Z]*|--recursive)\s'; then SAFE="node_modules|dist|\.next|build|\.cache|__pycache__|\.turbo|coverage|\.pytest_cache|tmp|\.parcel-cache|out" if ! echo "$NORMALIZED" | grep -qE "rm\s+-[^ ]+\s+\.?/?(${SAFE})(/|\s|$)"; then echo "Blocked: recursive delete. Only build artifacts (node_modules, dist, .next, etc.) can be rm -rf'd." >&2 @@ -22,7 +38,10 @@ if echo "$NORMALIZED" | grep -qE 'rm\s+(-[a-zA-Z]*r[a-zA-Z]*f|--recursive|-[a-zA fi # 2. Git history destruction -if echo "$NORMALIZED" | grep -qE 'git\s+push\s+.*--force\b|git\s+push\s+-f\b'; then +# Anchor `--force` against whitespace-or-end-of-string so the recommended +# replacement `--force-with-lease` (the `-` is non-whitespace) is not also +# matched by `--force\b` — `\b` fires at the word/non-word boundary. +if echo "$NORMALIZED" | grep -qE 'git\s+push\s+.*--force(\s|$)|git\s+push\s+-f(\s|$)'; then echo "Blocked: git push --force. Use --force-with-lease instead." >&2 exit 2 fi @@ -55,8 +74,13 @@ if echo "$NORMALIZED" | grep -qE 'git\s+config\s+--(local|worktree)\s'; then fi fi -# 5. 
.git/hooks, .git/config, and .git/worktrees modification (execute on host)
-if echo "$NORMALIZED" | grep -qE '\.git/(hooks|config|info/(attributes|exclude)|worktrees)'; then
+# 5. .git/hooks, .git/config, .git/info/, and .git/worktrees modification.
+# These execute on the host on the next git invocation. Pattern kept in sync
+# with hooks/protect-claude-config.sh:47 — the leading anchor differs (Bash
+# sees command strings, the Edit/Write hook sees realpath-resolved file paths)
+# but the inner subpath alternation is the same, so both hooks agree on which
+# parts of .git/ are protected.
+if echo "$NORMALIZED" | grep -qE '\.git/(config|info/|hooks/|worktrees/)'; then
 if echo "$NORMALIZED" | grep -qE '^(cat|less|head|tail|grep|rg|wc|ls|file|stat|git)\s'; then
 # Allow reads but block output redirects (cat > .git/hooks/x is a write, not a read)
 if ! echo "$NORMALIZED" | grep -qE '>'; then
@@ -73,8 +97,8 @@ if echo "$NORMALIZED" | grep -qE '(chmod|chown)\s+(-R|--recursive)\s'; then
 exit 2
 fi
-# 7. Direct credential/key file reads
-if echo "$NORMALIZED" | grep -qE '(cat|less|head|tail|cp|curl|base64|xxd)\s+.*(\.ssh/(id_|config|authorized)|\.claude/\.credentials)'; then
+# 7. Direct credential/key file reads (covers ~/.claude and per-account ~/.claude-<account>)
+if echo "$NORMALIZED" | grep -qE '(cat|less|head|tail|cp|curl|base64|xxd)\s+.*(\.ssh/(id_|config|authorized)|\.claude(-[a-z0-9_-]+)?/\.credentials)'; then
 echo "Blocked: reading credential/key files. Use git, gh, or npm which handle auth automatically." >&2
 exit 2
 fi
@@ -89,15 +113,17 @@ if echo "$NORMALIZED" | grep -qE 'npm\s+publish'; then
 exit 2
 fi
-# 9. Claude config modification via Bash (closes Edit/Write hook bypass)
-if echo "$NORMALIZED" | grep -qE '\.claude/(settings(\.local)?\.json|statusline-command\.sh|CLAUDE\.md|commands/|docker/|hooks/|plugins/)'; then
+# 9. Claude config modification via Bash (closes Edit/Write hook bypass).
+# Covers ~/.claude, per-account ~/.claude-<account>, and ~/.ckipper.
+# .claude-host.json is excluded by the trailing '/' in the regex. +if echo "$NORMALIZED" | grep -qE '\.claude(-[a-z0-9_-]+)?/(settings(\.local)?\.json|statusline-command\.sh|CLAUDE\.md|commands/|docker/|hooks/|plugins/)|/\.ckipper/'; then if echo "$NORMALIZED" | grep -qE '^(cat|less|head|tail|grep|rg|wc|ls|file|stat|jq)\s'; then # Allow reads but block output redirects (jq -n > settings.json is a write, not a read) if ! echo "$NORMALIZED" | grep -qE '>'; then exit 0 fi fi - echo "Blocked: modifying Claude config files via Bash. These are protected." >&2 + echo "Blocked: modifying Claude/Ckipper config files via Bash. These are protected." >&2 exit 2 fi diff --git a/hooks/bash-guardrails_test.bats b/hooks/bash-guardrails_test.bats new file mode 100644 index 0000000..e56c32e --- /dev/null +++ b/hooks/bash-guardrails_test.bats @@ -0,0 +1,125 @@ +#!/usr/bin/env bats +# Module-level tests for hooks/bash-guardrails.sh. +# Uses CKIPPER_DOCKERENV to simulate being inside a Docker container. + +load "${BATS_TEST_DIRNAME}/../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + export CKIPPER_DOCKERENV="$TMP_HOME/fake-dockerenv" + touch "$CKIPPER_DOCKERENV" +} + +teardown() { + teardown_isolated_env +} + +# Helper: pipe JSON command input to the hook. 
+_run_guardrails() { + local cmd="$1" + local input_json="{\"tool_input\":{\"command\":\"$cmd\"}}" + run env CKIPPER_DOCKERENV="$CKIPPER_DOCKERENV" \ + bash "$REPO_ROOT/hooks/bash-guardrails.sh" <<< "$input_json" +} + +@test "bash-guardrails blocks rm -rf on non-build-artifact paths" { + _run_guardrails "rm -rf /home/user/important-files" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails allows a safe read-only command" { + _run_guardrails "ls /workspace/src" + + [ "$status" -eq 0 ] +} + +@test "bash-guardrails fails closed on invalid JSON input" { + run env CKIPPER_DOCKERENV="$CKIPPER_DOCKERENV" \ + bash "$REPO_ROOT/hooks/bash-guardrails.sh" <<< "not-json" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Error" || "$output" =~ "error" || "$output" =~ "JSON" ]] +} + +# Regression: the rm-recursive guard required BOTH `r` AND `f` flags +# (regex `r[a-zA-Z]*f` or `f[a-zA-Z]*r`), so `rm -r /home/user/important` +# bypassed the check. With --dangerously-skip-permissions this could wipe +# arbitrary user state. Now the guard requires *either* recursive flag. +@test "bash-guardrails blocks rm -r without -f on a user path" { + _run_guardrails "rm -r /home/user/important-files" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails blocks rm -R (capital recursive flag)" { + _run_guardrails "rm -R /home/user/important-files" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails blocks rm --recursive on a user path" { + _run_guardrails "rm --recursive /home/user/important-files" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails still allows rm -rf on a build-artifact dir" { + _run_guardrails "rm -rf node_modules" + + [ "$status" -eq 0 ] +} + +# Regression: `\b` is a word/non-word boundary. 
In `--force-with-lease`, +# `force` is followed by `-` (non-word), so `\b` fired and the previous +# regex `git\s+push\s+.*--force\b` matched `--force-with-lease` — the +# very replacement the error message tells the user to switch to. Now +# the right side is anchored against whitespace or end-of-string, so +# `--force-with-lease` passes through. +@test "bash-guardrails allows git push --force-with-lease (the recommended replacement)" { + _run_guardrails "git push origin main --force-with-lease" + + [ "$status" -eq 0 ] +} + +@test "bash-guardrails still blocks git push --force" { + _run_guardrails "git push origin main --force" + + [ "$status" -eq 2 ] + [[ "$output" =~ "force" ]] +} + +@test "bash-guardrails still blocks git push -f" { + _run_guardrails "git push -f origin main" + + [ "$status" -eq 2 ] + [[ "$output" =~ "force" ]] +} + +# Path-protection regex should agree with hooks/protect-claude-config.sh +# on which .git/ subpaths are blocked. Slice 5 of the develop-branch +# review flagged the prior divergence — Bash blocked only +# `info/(attributes|exclude)`, Edit/Write blocked all of `info/`. +@test "bash-guardrails blocks writes anywhere under .git/info/ (matches Edit/Write hook)" { + _run_guardrails "echo bad > .git/info/refs" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails blocks writes to .git/info/attributes" { + _run_guardrails "echo bad-pattern > .git/info/attributes" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "bash-guardrails still allows reading .git/info/ files" { + _run_guardrails "cat .git/info/exclude" + + [ "$status" -eq 0 ] +} diff --git a/hooks/docker-context.sh b/hooks/docker-context.sh index 7727d4e..13dbb9f 100755 --- a/hooks/docker-context.sh +++ b/hooks/docker-context.sh @@ -3,7 +3,8 @@ # Reminds Claude of constraints so it doesn't accidentally trigger guardrails. # No-op on the host. -[ ! 
-f /.dockerenv ] && exit 0 +# Skip context injection when not in Docker (CKIPPER_DOCKERENV overrides path for testing) +[ ! -f "${CKIPPER_DOCKERENV:-/.dockerenv}" ] && exit 0 cat <<'CONTEXT' You are running inside a Docker container with --dangerously-skip-permissions. diff --git a/hooks/docker-context_test.bats b/hooks/docker-context_test.bats new file mode 100644 index 0000000..a1efb80 --- /dev/null +++ b/hooks/docker-context_test.bats @@ -0,0 +1,21 @@ +#!/usr/bin/env bats +# Module-level tests for hooks/docker-context.sh. +# Verifies that the hook is a no-op on the host (no /.dockerenv). + +load "${BATS_TEST_DIRNAME}/../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +@test "docker-context exits 0 and produces no output when not in Docker" { + # No CKIPPER_DOCKERENV set, so the file-not-found guard exits 0 early. + run env bash "$REPO_ROOT/hooks/docker-context.sh" + + [ "$status" -eq 0 ] + [ -z "$output" ] +} diff --git a/hooks/notify-bell.sh b/hooks/notify-bell.sh index 1104426..716c91e 100755 --- a/hooks/notify-bell.sh +++ b/hooks/notify-bell.sh @@ -4,7 +4,8 @@ # triggering native notifications (dock bounce, sound, etc.). # No-op on the host (host Claude handles notifications natively). -[ ! -f /.dockerenv ] && exit 0 +# Skip bell when not in Docker (CKIPPER_DOCKERENV overrides path for testing) +[ ! -f "${CKIPPER_DOCKERENV:-/.dockerenv}" ] && exit 0 printf '\a' exit 0 diff --git a/hooks/notify-bell_test.bats b/hooks/notify-bell_test.bats new file mode 100644 index 0000000..f98b493 --- /dev/null +++ b/hooks/notify-bell_test.bats @@ -0,0 +1,24 @@ +#!/usr/bin/env bats +# Module-level tests for hooks/notify-bell.sh. +# Verifies that the hook emits the bell character when in Docker. 
+ +load "${BATS_TEST_DIRNAME}/../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + export CKIPPER_DOCKERENV="$TMP_HOME/fake-dockerenv" + touch "$CKIPPER_DOCKERENV" +} + +teardown() { + teardown_isolated_env +} + +@test "notify-bell emits a bell character (\\a) when inside Docker" { + run env CKIPPER_DOCKERENV="$CKIPPER_DOCKERENV" \ + bash "$REPO_ROOT/hooks/notify-bell.sh" + + [ "$status" -eq 0 ] + # The bell character is \x07. + [[ "$output" == $'\a' ]] +} diff --git a/hooks/protect-claude-config.sh b/hooks/protect-claude-config.sh index fbd0a04..391b665 100755 --- a/hooks/protect-claude-config.sh +++ b/hooks/protect-claude-config.sh @@ -4,15 +4,50 @@ # settings.json (statusLine.command) to execute arbitrary code on the host. # Only active in Docker containers — no-op on the host. -# Skip protection when not in Docker -[ ! -f /.dockerenv ] && exit 0 +# Skip protection when not in Docker (CKIPPER_DOCKERENV overrides path for testing) +[ ! -f "${CKIPPER_DOCKERENV:-/.dockerenv}" ] && exit 0 -INPUT=$(cat) -FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty') +INPUT="$(cat)" +FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty') || { + echo "Error: hook input is not valid JSON; failing closed" >&2 + exit 2 +} + +# Resolve symlinks/relative segments before regex matching so attempts like +# /workspace/foo/../.git/config can't slip past the substring check. Fall back +# to the raw path if realpath isn't available (it lives in coreutils inside +# the container, but we keep this defensive for host-side test runs). +if command -v realpath >/dev/null 2>&1; then + RESOLVED_PATH=$(realpath -m "$FILE_PATH" 2>/dev/null || echo "$FILE_PATH") +else + RESOLVED_PATH="$FILE_PATH" +fi -if [[ "$FILE_PATH" =~ \.claude/(settings(\.local)?\.json|statusline-command\.sh|CLAUDE\.md|commands/|docker/|hooks/|plugins/) ]]; then +# Block Claude state subset under ~/.claude or any per-account ~/.claude-. 
+# Note: ~/.claude-host.json is the read-only staging mount and is intentionally
+# excluded — the regex requires a '/' after the optional -<account> suffix, while
+# .claude-host.json has '.json' instead.
+if [[ $RESOLVED_PATH =~ \.claude(-[a-z0-9_-]+)?/(settings(\.local)?\.json|statusline-command\.sh|CLAUDE\.md|commands/|docker/|hooks/|plugins/) ]]; then
 echo "Blocked: cannot modify $FILE_PATH (protected Claude config file)" >&2
 exit 2
 fi
+# Block anything under ~/.ckipper (registry, hooks, settings-template, docker tooling).
+# Tampering with accounts.json could redirect another account's keychain_service.
+if [[ $RESOLVED_PATH =~ /\.ckipper/ ]]; then
+ echo "Blocked: cannot modify $FILE_PATH (protected Ckipper file)" >&2
+ exit 2
+fi
+
+# Block writes inside the host repo's .git/ (mounted RW into the container by
+# docker-mode.zsh). Edits to .git/hooks/, .git/config, .git/info/, or
+# .git/worktrees/ execute on the host the next time the user runs git, so they
+# constitute a container-escape vector. The leading '/' anchor avoids
+# over-blocking '.gitignore', '.github/', or directories like '.git-foo/'.
+# Pattern subpath kept in sync with hooks/bash-guardrails.sh:78.
+if [[ $RESOLVED_PATH =~ /\.git/(config|info/|hooks/|worktrees/) ]]; then
+ echo "Blocked: cannot modify $FILE_PATH (protected host .git file)" >&2
+ exit 2
+fi
+
 exit 0
diff --git a/hooks/protect-claude-config_test.bats b/hooks/protect-claude-config_test.bats
new file mode 100644
index 0000000..3a042b8
--- /dev/null
+++ b/hooks/protect-claude-config_test.bats
@@ -0,0 +1,100 @@
+#!/usr/bin/env bats
+# Module-level tests for hooks/protect-claude-config.sh.
+# Uses CKIPPER_DOCKERENV to simulate being inside a Docker container.
+
+load "${BATS_TEST_DIRNAME}/../tests/lib/test-helper.bash"
+
+setup() {
+ setup_isolated_env
+ # Create a fake .dockerenv so the Docker-only check passes.
+ export CKIPPER_DOCKERENV="$TMP_HOME/fake-dockerenv" + touch "$CKIPPER_DOCKERENV" +} + +teardown() { + teardown_isolated_env +} + +# Helper: pipe JSON input to the hook and capture exit code + output. +_run_protect() { + local input_json="$1" + run env CKIPPER_DOCKERENV="$CKIPPER_DOCKERENV" \ + bash "$REPO_ROOT/hooks/protect-claude-config.sh" <<< "$input_json" +} + +@test "protect-claude-config blocks writes to .claude/settings.json" { + _run_protect '{"tool_input":{"file_path":"/home/user/.claude/settings.json"}}' + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "protect-claude-config allows writes to an unprotected path" { + _run_protect '{"tool_input":{"file_path":"/home/user/projects/app/src/index.js"}}' + + [ "$status" -eq 0 ] +} + +@test "protect-claude-config fails closed on invalid JSON input" { + _run_protect "not-json" + + [ "$status" -eq 2 ] + [[ "$output" =~ "Error" || "$output" =~ "error" || "$output" =~ "JSON" ]] +} + +# .git/ blocking — closes the docker-mode RW bind-mount escape vector where +# Edit/Write tool calls plant hooks or git config overrides that execute on +# the host. Mirrors the bash-side coverage in bash-guardrails.sh. 
+ +@test "protect-claude-config blocks writes to /workspace/.git/config" { + _run_protect '{"tool_input":{"file_path":"/workspace/.git/config"}}' + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] + [[ "$output" =~ ".git" ]] +} + +@test "protect-claude-config blocks writes to /workspace/.git/hooks/post-commit" { + _run_protect '{"tool_input":{"file_path":"/workspace/.git/hooks/post-commit"}}' + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "protect-claude-config blocks writes to /Users/x/.git/info/attributes" { + _run_protect '{"tool_input":{"file_path":"/Users/x/.git/info/attributes"}}' + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "protect-claude-config blocks writes to /workspace/.git/worktrees/foo/HEAD" { + _run_protect '{"tool_input":{"file_path":"/workspace/.git/worktrees/foo/HEAD"}}' + + [ "$status" -eq 2 ] + [[ "$output" =~ "Blocked" ]] +} + +@test "protect-claude-config allows writes to /workspace/.gitignore" { + _run_protect '{"tool_input":{"file_path":"/workspace/.gitignore"}}' + + [ "$status" -eq 0 ] +} + +@test "protect-claude-config allows writes to /workspace/.github/workflows/ci.yml" { + _run_protect '{"tool_input":{"file_path":"/workspace/.github/workflows/ci.yml"}}' + + [ "$status" -eq 0 ] +} + +@test "protect-claude-config allows writes to differently named .git-hooks-config dir" { + _run_protect '{"tool_input":{"file_path":"/workspace/some/.git-hooks-config/x"}}' + + [ "$status" -eq 0 ] +} + +@test "protect-claude-config allows writes to a path with 'git' as a path segment" { + _run_protect '{"tool_input":{"file_path":"/workspace/file/with/git/in/path/file.txt"}}' + + [ "$status" -eq 0 ] +} diff --git a/install.sh b/install.sh index a10de06..ddff32d 100755 --- a/install.sh +++ b/install.sh @@ -1,24 +1,26 @@ #!/bin/bash set -e -echo "=== Claude Docker Sandbox Installer ===" +echo "=== Ckipper Installer ===" echo "" REPO_DIR="$(cd "$(dirname "$0")" && pwd)" +CKIPPER_DIR="${CKIPPER_DIR:-$HOME/.ckipper}" # 
1. Check prerequisites echo "Checking prerequisites..." -missing=() -command -v docker &>/dev/null || missing+=("docker (install Docker Desktop)") -command -v jq &>/dev/null || missing+=("jq (brew install jq)") -command -v git &>/dev/null || missing+=("git") +missing_dependencies=() +command -v docker &>/dev/null || missing_dependencies+=("docker (install Docker Desktop)") +command -v jq &>/dev/null || missing_dependencies+=("jq (brew install jq)") +command -v git &>/dev/null || missing_dependencies+=("git") +command -v gum &>/dev/null || missing_dependencies+=("gum (brew install gum)") if [[ "$(uname)" == "Darwin" ]]; then - command -v security &>/dev/null || missing+=("security (macOS Keychain CLI)") + command -v security &>/dev/null || missing_dependencies+=("security (macOS Keychain CLI)") fi -if [[ ${#missing[@]} -gt 0 ]]; then +if [[ ${#missing_dependencies[@]} -gt 0 ]]; then echo "Missing prerequisites:" - for dep in "${missing[@]}"; do + for dep in "${missing_dependencies[@]}"; do echo " - $dep" done echo "" @@ -29,82 +31,179 @@ echo " All prerequisites found." echo "" # 2. Copy Docker files -echo "Copying Docker files to ~/.claude/docker/..." -mkdir -p "$HOME/.claude/docker" -cp "$REPO_DIR/docker/Dockerfile" "$HOME/.claude/docker/" -cp "$REPO_DIR/docker/entrypoint.sh" "$HOME/.claude/docker/" -cp "$REPO_DIR/docker/init-firewall.sh" "$HOME/.claude/docker/" -chmod +x "$HOME/.claude/docker/entrypoint.sh" -chmod +x "$HOME/.claude/docker/init-firewall.sh" - -# 3. Copy hooks -echo "Copying hooks to ~/.claude/hooks/..." 
-mkdir -p "$HOME/.claude/hooks" -cp "$REPO_DIR/hooks/protect-claude-config.sh" "$HOME/.claude/hooks/" -cp "$REPO_DIR/hooks/bash-guardrails.sh" "$HOME/.claude/hooks/" -cp "$REPO_DIR/hooks/docker-context.sh" "$HOME/.claude/hooks/" -cp "$REPO_DIR/hooks/notify-bell.sh" "$HOME/.claude/hooks/" -chmod +x "$HOME/.claude/hooks/protect-claude-config.sh" -chmod +x "$HOME/.claude/hooks/bash-guardrails.sh" -chmod +x "$HOME/.claude/hooks/docker-context.sh" -chmod +x "$HOME/.claude/hooks/notify-bell.sh" - -# 4. Copy w-function.zsh -echo "Copying w-function.zsh to ~/.claude/docker/..." -cp "$REPO_DIR/w-function.zsh" "$HOME/.claude/docker/" - -# 5. Generate w-config.zsh (only if it doesn't exist — never overwrite) -config_file="$HOME/.claude/docker/w-config.zsh" -if [[ ! -f "$config_file" ]]; then - cp "$REPO_DIR/w-config.zsh.example" "$config_file" - echo " Created w-config.zsh with defaults — edit to add your MCP mounts, ports, etc." +echo "Copying Docker files to $CKIPPER_DIR/docker/..." +mkdir -p "$CKIPPER_DIR/docker" +cp "$REPO_DIR/docker/Dockerfile" "$CKIPPER_DIR/docker/" +cp "$REPO_DIR/docker/entrypoint.sh" "$CKIPPER_DIR/docker/" +cp "$REPO_DIR/docker/init-firewall.sh" "$CKIPPER_DIR/docker/" +cp "$REPO_DIR/docker/cleanup-projects.py" "$CKIPPER_DIR/docker/" +chmod +x "$CKIPPER_DIR/docker/entrypoint.sh" +chmod +x "$CKIPPER_DIR/docker/init-firewall.sh" +chmod +x "$CKIPPER_DIR/docker/cleanup-projects.py" + +# 3. Copy hooks (canonical source for ckipper account redeploy-hooks) +echo "Copying hooks to $CKIPPER_DIR/hooks/..." 
+mkdir -p "$CKIPPER_DIR/hooks" +cp "$REPO_DIR/hooks/protect-claude-config.sh" "$CKIPPER_DIR/hooks/" +cp "$REPO_DIR/hooks/bash-guardrails.sh" "$CKIPPER_DIR/hooks/" +cp "$REPO_DIR/hooks/docker-context.sh" "$CKIPPER_DIR/hooks/" +cp "$REPO_DIR/hooks/notify-bell.sh" "$CKIPPER_DIR/hooks/" +chmod +x "$CKIPPER_DIR/hooks/protect-claude-config.sh" +chmod +x "$CKIPPER_DIR/hooks/bash-guardrails.sh" +chmod +x "$CKIPPER_DIR/hooks/docker-context.sh" +chmod +x "$CKIPPER_DIR/hooks/notify-bell.sh" + +# 4. Copy ckipper.zsh and the lib/ tree. +echo "Copying ckipper.zsh and lib/ to $CKIPPER_DIR/docker/..." +cp "$REPO_DIR/ckipper.zsh" "$CKIPPER_DIR/docker/" + +# Deploy lib/ tree, EXCLUDING test files (*_test.bats, *_test.py). +# Tests must NOT ship to user installs: +# - they're noise in the runtime tree +# - test stubs in tests/lib/stubs/ would appear as binaries on PATH if accidentally exposed +if command -v rsync >/dev/null 2>&1; then + rsync -a --delete \ + --exclude='*_test.bats' \ + --exclude='*_test.py' \ + --exclude='__pycache__' \ + "$REPO_DIR/lib/" "$CKIPPER_DIR/docker/lib/" else - echo " w-config.zsh already exists (not overwritten)" + # Fallback: tar pipe with excludes (no rsync available). + rm -rf "$CKIPPER_DIR/docker/lib" + (cd "$REPO_DIR" && tar -cf - --exclude='*_test.bats' --exclude='*_test.py' --exclude='__pycache__' lib) | + (cd "$CKIPPER_DIR/docker" && tar -xf -) fi -# 6. Merge settings-hooks.json into ~/.claude/settings.json -echo "Merging hooks into ~/.claude/settings.json..." -settings_file="$HOME/.claude/settings.json" -if [[ ! -f "$settings_file" ]]; then - echo '{}' > "$settings_file" +# Defense in depth: verify no test files leaked into the install. +if find "$CKIPPER_DIR/docker/lib" \( -name '*_test.*' -o -name '__pycache__' \) 2>/dev/null | grep -q .; then + echo "ERROR: test files leaked into $CKIPPER_DIR/docker/lib/" >&2 + find "$CKIPPER_DIR/docker/lib" \( -name '*_test.*' -o -name '__pycache__' \) >&2 + exit 1 +fi + +# 5. 
Migrate any existing pre-merge config + clean up stale paths +# (w-function.zsh, w-config.zsh, _w completion file). +if [ -f "$CKIPPER_DIR/docker/w-config.zsh" ]; then + if [ ! -f "$CKIPPER_DIR/docker/ckipper-config.zsh" ]; then + echo " Migrating $CKIPPER_DIR/docker/w-config.zsh → ckipper-config.zsh (preserves your settings)" + # Rewrite assignments of the five known pre-merge variables to their + # CKIPPER_* counterparts. Allow-list (not blanket W_* → CKIPPER_*) so + # we don't mangle user comments or unrelated W_-prefixed names. The + # anchor `^[[:space:]]*` matches assignment lines only, leaving + # comment text intact. + sed -E 's/^([[:space:]]*)W_(PROJECTS_DIR|WORKTREES_DIR|PORTS|EXTRA_VOLUMES|EXTRA_ENV)/\1CKIPPER_\2/' \ + "$CKIPPER_DIR/docker/w-config.zsh" >"$CKIPPER_DIR/docker/ckipper-config.zsh" + fi + echo " Removing stale $CKIPPER_DIR/docker/w-config.zsh" + rm -f "$CKIPPER_DIR/docker/w-config.zsh" +fi +if [ -f "$CKIPPER_DIR/docker/w-function.zsh" ]; then + echo " Removing stale $CKIPPER_DIR/docker/w-function.zsh (replaced by ckipper.zsh)" + rm -f "$CKIPPER_DIR/docker/w-function.zsh" fi -hooks_json=$(jq 'del(._comment)' "$REPO_DIR/settings-hooks.json") -jq --argjson hooks "$hooks_json" '. * $hooks' "$settings_file" > "${settings_file}.tmp" \ - && mv "${settings_file}.tmp" "$settings_file" -echo " Hooks merged." - -# 7. Add source line to .zshrc (if not already present) -if ! grep -q 'w-function.zsh' "$HOME/.zshrc" 2>/dev/null; then - echo '' >> "$HOME/.zshrc" - echo '# Worktree Manager (w function)' >> "$HOME/.zshrc" - echo 'source "$HOME/.claude/docker/w-function.zsh"' >> "$HOME/.zshrc" - echo " Added w() source line to ~/.zshrc" +if [ -f "$HOME/.zsh/completions/_w" ]; then + echo " Removing stale $HOME/.zsh/completions/_w (replaced by _ckipper)" + rm -f "$HOME/.zsh/completions/_w" +fi + +# Generate ckipper-config.zsh (only if it doesn't exist — never overwrite user customizations). 
+# Also preserve accounts.json and aliases.zsh if they already exist (managed by ckipper CLI). +config_file="$CKIPPER_DIR/docker/ckipper-config.zsh" +if [[ ! -f $config_file ]]; then + cp "$REPO_DIR/templates/ckipper-config.zsh.example" "$config_file" + echo " Created ckipper-config.zsh with defaults — edit to add your MCP mounts, ports, etc." else - echo " ~/.zshrc already sources w-function.zsh" + echo " ckipper-config.zsh already exists (not overwritten)" fi +[[ -f "$CKIPPER_DIR/accounts.json" ]] && echo " accounts.json already exists (not overwritten — managed by ckipper)" +[[ -f "$CKIPPER_DIR/aliases.zsh" ]] && echo " aliases.zsh already exists (not overwritten — auto-generated)" -# 8. Warn about inlined w() from old installs -if grep -q '^w()' "$HOME/.zshrc" 2>/dev/null || grep -q '^_w_build_image()' "$HOME/.zshrc" 2>/dev/null; then - echo "" - echo "WARNING: Your ~/.zshrc contains an inlined w() function from a previous install." - echo "The new approach sources it from ~/.claude/docker/w-function.zsh instead." - echo "Please remove the old inlined function from ~/.zshrc manually." - echo "(Search for '_w_build_image()' or 'w()' and remove everything through the 'COMPEOF' line)" +# 6. Deploy settings-template.json (consumed by ckipper account add / redeploy-hooks per-account) +echo "Copying settings-template.json to $CKIPPER_DIR/..." +cp "$REPO_DIR/templates/settings-template.json" "$CKIPPER_DIR/settings-template.json" +echo " Settings template deployed. ckipper account redeploy-hooks applies it per-account." + +# 7. Add or update source line in .zshrc +# Pre-merge installs sourced w-function.zsh from ~/.claude/docker/ or +# ~/.ckipper/docker/. The regex matches either install root and rewrites +# to the canonical ~/.ckipper/docker/ckipper.zsh. +# +# Edge cases handled: +# - trailing comment on the source line (`source "..." 
# ckipper`) +# - sed regex failing to match anything: we detect the no-op and append a +# working source line so the user is never left with a broken zshrc. +# - timestamped backup so re-runs don't clobber the previous .bak. +if grep -qE '/docker/w-function\.zsh' "$HOME/.zshrc" 2>/dev/null; then + zshrc_backup="$HOME/.zshrc.ckipper-bak.$(date -u +%Y%m%dT%H%M%SZ)" + cp "$HOME/.zshrc" "$zshrc_backup" + zshrc_tmp="$HOME/.zshrc.ckipper-tmp.$$" + sed -E 's|^[[:space:]]*source[[:space:]]+["'\'']?[$~/][^"'\'']*/docker/w-function\.zsh["'\'']?[[:space:]]*(#.*)?$|source "$HOME/.ckipper/docker/ckipper.zsh"|' \ + "$HOME/.zshrc" >"$zshrc_tmp" && mv "$zshrc_tmp" "$HOME/.zshrc" + if grep -qE '/docker/w-function\.zsh' "$HOME/.zshrc" 2>/dev/null; then + # Sed didn't match the stale source line (unusual whitespace, + # quoting, or an exotic comment). Append a working source line so + # ckipper still loads, and warn the user to remove the stale one. + if ! grep -q 'ckipper/docker/ckipper\.zsh' "$HOME/.zshrc" 2>/dev/null; then + echo '' >>"$HOME/.zshrc" + echo '# Ckipper — multi-account Claude Code manager (ckipper + ckipper worktree run)' >>"$HOME/.zshrc" + echo 'source "$HOME/.ckipper/docker/ckipper.zsh"' >>"$HOME/.zshrc" + fi + echo " WARNING: could not rewrite stale w-function.zsh source line in ~/.zshrc." + echo " Appended a working ckipper.zsh source line; please remove the stale one manually." + echo " Backup: $zshrc_backup" + else + echo " Updated ~/.zshrc source line to ~/.ckipper/docker/ckipper.zsh. Backup: $zshrc_backup" + fi +elif ! grep -q 'ckipper/docker/ckipper\.zsh' "$HOME/.zshrc" 2>/dev/null; then + echo '' >>"$HOME/.zshrc" + echo '# Ckipper — multi-account Claude Code manager (ckipper + ckipper worktree run)' >>"$HOME/.zshrc" + echo 'source "$HOME/.ckipper/docker/ckipper.zsh"' >>"$HOME/.zshrc" + echo " Added ckipper source line to ~/.zshrc" +else + echo " ~/.zshrc already sources ~/.ckipper/docker/ckipper.zsh" fi -# 9. Set up git hooks path +# 8. 
Print (do not auto-append) the optional aliases.zsh source line +echo "" +echo "Optional: enable per-account launchers (claude- and bare ) by adding to ~/.zshrc:" +echo " [[ -f ~/.ckipper/aliases.zsh ]] && source ~/.ckipper/aliases.zsh" +echo "" + +# 9. Set up git hooks path (only if user hasn't already configured a different one, +# e.g. for husky, pre-commit, or another tool — never silently clobber) echo "Configuring git hooks path..." mkdir -p "$HOME/.git-hooks" -git config --global core.hooksPath "$HOME/.git-hooks" +existing_hookspath=$(git config --global --get core.hooksPath 2>/dev/null || true) +if [ -z "$existing_hookspath" ] || [ "$existing_hookspath" = "$HOME/.git-hooks" ]; then + git config --global core.hooksPath "$HOME/.git-hooks" + echo " Set core.hooksPath = $HOME/.git-hooks" +else + echo " Skipping core.hooksPath: existing value is '$existing_hookspath' (not overwriting)." + echo " If you want Ckipper's hook isolation, set manually:" + echo ' git config --global core.hooksPath "$HOME/.git-hooks"' +fi # 10. Print summary echo "" echo "=== Setup Complete ===" echo "" -echo "Next steps:" -echo " 1. Edit ~/.claude/docker/w-config.zsh with your MCP mounts, ports, etc." -echo " 2. source ~/.zshrc" -echo " 3. w --rebuild-image" -echo " 4. w test-branch --docker claude" -echo "" + +# 11. Auto-invoke `ckipper setup` if interactive shell. +# Non-interactive callers (CI, piped installers) get a printed hint instead so +# the wizard never blocks on a controlling-terminal it does not have. +if [[ -t 0 && -t 1 ]]; then + echo "Launching ckipper setup wizard..." + echo "" + # Spawn a fresh zsh subshell so the wizard runs against the freshly-installed + # ckipper.zsh (sourced from the canonical install path, not the repo). + zsh -c "source \"$CKIPPER_DIR/docker/ckipper.zsh\" && ckipper setup" +else + echo "Non-interactive shell detected — skipping wizard." + echo "" + echo "Next steps:" + echo " 1. source ~/.zshrc" + echo " 2. 
ckipper setup # interactive configuration wizard" + echo "" +fi + echo "To update later: git pull && ./install.sh" diff --git a/install_test.bats b/install_test.bats new file mode 100644 index 0000000..c026f29 --- /dev/null +++ b/install_test.bats @@ -0,0 +1,197 @@ +#!/usr/bin/env bats + +load "${BATS_TEST_DIRNAME}/tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +@test "install.sh fresh install creates ~/.ckipper/docker/ tree with lib/" { + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + [ -d "$TMP_HOME/.ckipper/docker/lib/core" ] + [ -d "$TMP_HOME/.ckipper/docker/lib/account" ] + [ -d "$TMP_HOME/.ckipper/docker/lib/worktree" ] + [ -f "$TMP_HOME/.ckipper/docker/ckipper.zsh" ] + [ -f "$TMP_HOME/.ckipper/docker/ckipper-config.zsh" ] +} + +@test "install.sh excludes test files from deployed lib/" { + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + run find "$TMP_HOME/.ckipper/docker/lib" \( -name '*_test.*' -o -name '__pycache__' \) + [ -z "$output" ] +} + +@test "install.sh re-run preserves customised ckipper-config.zsh" { + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" run "$REPO_ROOT/install.sh" + [ "$status" -eq 0 ] + echo 'CUSTOM_VALUE="preserve_me"' >> "$TMP_HOME/.ckipper/docker/ckipper-config.zsh" + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + grep -q 'CUSTOM_VALUE="preserve_me"' "$TMP_HOME/.ckipper/docker/ckipper-config.zsh" +} + +@test "install.sh deletes stale w-function.zsh from a pre-merge install" { + mkdir -p "$TMP_HOME/.ckipper/docker" + echo "# stale w-function" > "$TMP_HOME/.ckipper/docker/w-function.zsh" + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + [ ! 
-f "$TMP_HOME/.ckipper/docker/w-function.zsh" ] +} + +@test "install.sh deletes stale _w completion file from a pre-merge install" { + mkdir -p "$HOME/.zsh/completions" + echo "# stale _w" > "$HOME/.zsh/completions/_w" + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + [ ! -f "$TMP_HOME/.zsh/completions/_w" ] +} + +@test "install.sh migrates pre-merge w-config.zsh into ckipper-config.zsh" { + mkdir -p "$TMP_HOME/.ckipper/docker" + cat > "$TMP_HOME/.ckipper/docker/w-config.zsh" << 'EOF' +# pre-merge user config +CUSTOM_VALUE="from_old_config" +EOF + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + [ ! -f "$TMP_HOME/.ckipper/docker/w-config.zsh" ] + [ -f "$TMP_HOME/.ckipper/docker/ckipper-config.zsh" ] + grep -q 'CUSTOM_VALUE="from_old_config"' "$TMP_HOME/.ckipper/docker/ckipper-config.zsh" +} + +@test "install.sh renames W_* assignments to CKIPPER_* during migration" { + mkdir -p "$TMP_HOME/.ckipper/docker" + cat > "$TMP_HOME/.ckipper/docker/w-config.zsh" << 'EOF' +# pre-merge user config — every customizable variable +W_PROJECTS_DIR="$HOME/myrepos" +W_WORKTREES_DIR="$HOME/myworktrees" +W_PORTS=(3000 8080 9090 12345) +W_EXTRA_VOLUMES=("/data:/data:ro") +W_EXTRA_ENV=("CUSTOM_KEY=v") +# Comment with W_PORTS in it should NOT be rewritten. +EOF + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + local cfg="$TMP_HOME/.ckipper/docker/ckipper-config.zsh" + grep -q '^CKIPPER_PROJECTS_DIR=' "$cfg" + grep -q '^CKIPPER_WORKTREES_DIR=' "$cfg" + grep -q '^CKIPPER_PORTS=(3000 8080 9090 12345)' "$cfg" + grep -q '^CKIPPER_EXTRA_VOLUMES=' "$cfg" + grep -q '^CKIPPER_EXTRA_ENV=' "$cfg" + # Comment lines must not be rewritten — preserve original W_PORTS reference. + grep -q '# Comment with W_PORTS in it' "$cfg" + # No leftover W_* assignments (excluding the comment). + ! 
grep -qE '^[[:space:]]*W_(PROJECTS_DIR|WORKTREES_DIR|PORTS|EXTRA_VOLUMES|EXTRA_ENV)=' "$cfg" + + # Functional check: post-install ckipper.zsh sees the renamed variables. + run zsh -c "source '$REPO_ROOT/ckipper.zsh'; print -r -- \"\${CKIPPER_PORTS[*]}\"" + [ "$status" -eq 0 ] + [[ "$output" =~ "3000 8080 9090 12345" ]] +} + +@test "install.sh rewrites pre-merge ~/.zshrc source line with trailing comment" { + cat > "$HOME/.zshrc" << 'EOF' +# Some user config +source "$HOME/.ckipper/docker/w-function.zsh" # ckipper bootstrap +EOF + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + grep -q 'ckipper/docker/ckipper\.zsh' "$HOME/.zshrc" + ! grep -q 'ckipper/docker/w-function\.zsh' "$HOME/.zshrc" +} + +@test "install.sh creates timestamped backup, does not clobber on second run" { + cat > "$HOME/.zshrc" << 'EOF' +source "$HOME/.ckipper/docker/w-function.zsh" +EOF + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + [ "$status" -eq 0 ] + + local first_backup + first_backup=$(ls "$HOME"/.zshrc.ckipper-bak.* 2>/dev/null | head -1) + [ -n "$first_backup" ] + grep -q 'ckipper/docker/w-function\.zsh' "$first_backup" + + # Second install run on already-rewritten zshrc — must not clobber backup. + sleep 1 + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + [ "$status" -eq 0 ] + [ -f "$first_backup" ] + grep -q 'ckipper/docker/w-function\.zsh' "$first_backup" +} + +@test "install.sh rewrites pre-merge ~/.zshrc source line to ckipper.zsh" { + cat > "$HOME/.zshrc" << 'EOF' +# Some user config +source "$HOME/.ckipper/docker/w-function.zsh" +EOF + + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + grep -q 'ckipper/docker/ckipper\.zsh' "$HOME/.zshrc" + ! 
grep -q 'ckipper/docker/w-function\.zsh' "$HOME/.zshrc" +} + +@test "install.sh skips wizard and prints manual hint in non-interactive shell" { + # bats's `run` invokes install.sh with stdin/stdout disconnected from a tty, + # which trips the `[[ -t 0 && -t 1 ]]` guard. The non-interactive branch must + # complete the install AND print the manual `ckipper setup` hint. + HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + run "$REPO_ROOT/install.sh" + + [ "$status" -eq 0 ] + [[ "$output" == *"Non-interactive"* ]] + [[ "$output" == *"ckipper setup"* ]] +} + +@test "install.sh fails fast when gum is missing" { + # Build a PATH where docker/jq/git/security exist as no-op stubs but gum + # does NOT — coreutils (/usr/bin, /bin) stay reachable so install.sh's + # pre-check shell plumbing (dirname, pwd, uname) still works. + # install.sh must exit non-zero with "gum" mentioned in the output. + local fake_path="$BATS_TEST_TMPDIR/fake_bin" + mkdir -p "$fake_path" + for cmd in docker jq git security; do + printf '#!/bin/sh\nexit 0\n' > "$fake_path/$cmd" + chmod +x "$fake_path/$cmd" + done + + run env HOME="$TMP_HOME" CKIPPER_DIR="$TMP_HOME/.ckipper" \ + PATH="$fake_path:/usr/bin:/bin" "$REPO_ROOT/install.sh" + + [ "$status" -ne 0 ] + [[ "$output" == *"gum"* ]] +} diff --git a/lib/account/account-management.zsh b/lib/account/account-management.zsh new file mode 100644 index 0000000..4d55533 --- /dev/null +++ b/lib/account/account-management.zsh @@ -0,0 +1,579 @@ +#!/usr/bin/env zsh +# Account lifecycle subcommands: add, finalize_registration, remove, rename, list, default, bare_alias_safe. + +# Module-level context for the in-progress registration. +# Populated by callers before invoking _ckipper_account_finalize_registration. +# Fields: name, dir, service +typeset -gA _CKIPPER_FINALIZE_CTX + +# Module-level context for the in-progress rename. +# Populated by _ckipper_account_rename before invoking _ckipper_account_rename_perform. 
+# Fields: old_dir, new_dir
+typeset -gA _CKIPPER_RENAME_CTX
+
+# Module-level context for `ckipper account list`: the default account name,
+# set once by `_ckipper_account_list` and read by `_ckipper_account_list_row`
+# to pick the marker. Lets the row helper stay at 3 positional args (the
+# 3-parameter cap from .claude/rules/code-style.md).
+typeset -g _CKIPPER_ACCOUNT_LIST_DEFAULT=""
+
+# Validate the account name and --adopt flag from `ckipper account add` arguments.
+# Prints error messages to stderr and returns non-zero on failure.
+#
+# Args:
+#   $1 — account name
+#   $2 — "--adopt" or empty
+#
+# Returns:
+#   0 if valid; 1 on empty name, invalid name format, or already registered.
+_ckipper_account_add_validate_name() {
+  local name="$1"
+  if [[ -z "$name" ]]; then
+    echo "Usage: ckipper account add <name> [--adopt]" >&2
+    return 1
+  fi
+  if [[ ! "$name" =~ ^[a-z0-9_-]+$ ]]; then
+    echo "Account name must match ^[a-z0-9_-]+$ (lowercase alphanumeric, underscore, hyphen)." >&2
+    return 1
+  fi
+  if jq -e --arg n "$name" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then
+    echo "Account '$name' is already registered." >&2
+    return 1
+  fi
+}
+
+# Run the adopt flow: pick a Keychain entry and finalize registration for an existing dir.
+#
+# Args:
+#   $1 — account name
+#   $2 — account config directory
+#
+# Returns:
+#   0 on success; 1 on validation or registration failure.
+_ckipper_account_add_adopt_flow() {
+  local name="$1" dir="$2"
+  if [[ ! -d "$dir" ]]; then
+    echo "Cannot adopt: $dir does not exist." >&2
+    return 1
+  fi
+  local picked=""
+  if [[ "${_CKIPPER_TEST_OSTYPE:-$OSTYPE}" == darwin* ]]; then
+    picked=$(_ckipper_account_add_pick_keychain_entry "$name") || return 1
+  fi
+  _CKIPPER_FINALIZE_CTX[name]="$name"
+  _CKIPPER_FINALIZE_CTX[dir]="$dir"
+  _CKIPPER_FINALIZE_CTX[service]="$picked"
+  _ckipper_account_finalize_registration "adopt"
+}
+
+# Sentinel item shown alongside Keychain candidates so the picker has an
+# explicit "skip" choice (consistent with _core_prompt_choose's no-empty-on-cancel
+# semantics).
+readonly _CKIPPER_ACCOUNT_KEYCHAIN_SKIP_LABEL="(skip — register without Keychain entry)"
+
+# Prompt the user to pick a Keychain entry from the available candidates.
+# Echoes the chosen service name to stdout (or empty string when the user
+# picked the skip sentinel or no candidates exist). zsh has no working
+# `local -n` / `typeset -n`, so the contract is stdout-capture rather than
+# nameref — the caller does `picked=$(_ckipper_account_add_pick_keychain_entry "$name")`.
+#
+# Args:
+#   $1 — account name (for the prompt label and error messages)
+#
+# Returns:
+#   0 on success; 1 on keychain error or invalid service shape.
+#
+# Errors (stderr):
+#   "Invalid Keychain service shape: <service>" — when the picked entry has
+#   an unexpected shape (caught by _core_keychain_validate).
+_ckipper_account_add_pick_keychain_entry() {
+  local name="$1"
+  local candidates
+  candidates=$(_core_keychain_snapshot) || return 1
+  [[ -z "$candidates" ]] && return 0
+  local -a items
+  items=( ${(f)candidates} "$_CKIPPER_ACCOUNT_KEYCHAIN_SKIP_LABEL" )
+  local picked
+  picked=$(_core_prompt_choose "Pick a Keychain entry for '$name'" "${items[@]}")
+  [[ -z "$picked" || "$picked" == "$_CKIPPER_ACCOUNT_KEYCHAIN_SKIP_LABEL" ]] && return 0
+  if ! _core_keychain_validate "$picked"; then
+    echo "Invalid Keychain service shape: $picked" >&2
+    return 1
+  fi
+  echo "$picked"
+}
+
+# Run the fresh registration flow: create the dir, deploy hooks, launch Claude, detect new keychain.
+# +# Args: +# $1 — account name +# $2 — account config directory +# +# Returns: +# 0 on success; 1 on abort or credential detection failure. +_ckipper_account_add_fresh_flow() { + local name="$1" dir="$2" + if [[ -d "$dir" ]]; then + echo "Directory $dir already exists. Use --adopt to register it." >&2 + return 1 + fi + mkdir -p "$dir/hooks" + if [[ -f "$CKIPPER_DIR/settings-template.json" ]]; then + cp "$CKIPPER_DIR/settings-template.json" "$dir/settings.json" + fi + _ckipper_account_redeploy_hooks_for "$name" "$dir" + local before_snapshot + before_snapshot=$(_core_keychain_snapshot) || return 1 + _ckipper_account_add_launch_claude "$name" "$dir" || return 1 + local after_snapshot + after_snapshot=$(_core_keychain_snapshot) || return 1 + local new_service + new_service=$(comm -13 \ + <(printf '%s\n' "$before_snapshot") \ + <(printf '%s\n' "$after_snapshot") | head -1) + _ckipper_account_add_check_credentials "$name" "$dir" "$new_service" || return 1 + _CKIPPER_FINALIZE_CTX[name]="$name" + _CKIPPER_FINALIZE_CTX[dir]="$dir" + _CKIPPER_FINALIZE_CTX[service]="$new_service" + _ckipper_account_finalize_registration "fresh" +} + +# Display the fresh-add instructions, prompt for confirmation, and launch Claude. +# +# Args: +# $1 — account name +# $2 — account config directory +# +# Returns: +# 0 after Claude exits; 1 if user chose to skip. +_ckipper_account_add_launch_claude() { + local name="$1" dir="$2" + cat <&2 + echo "Refusing to register. Use --adopt to register manually." >&2 + return 1 + fi + echo "Detected new Keychain entry: $new_service" + return 0 + fi + if [[ -f "$dir/.credentials.json" ]]; then + echo "No new Keychain entry, but $dir/.credentials.json exists — proceeding with on-disk credentials." + return 0 + fi + echo "Warning: no new Keychain entry detected and no .credentials.json on disk." >&2 + echo "Login may not have completed. 
Re-run /login or use: ckipper account add $name --adopt" >&2 + return 1 +} + +# Register a new account interactively (fresh login) or by adopting an existing directory. +# +# Args: +# $1 — account name (must match ^[a-z0-9_-]+$) +# $2 — "--adopt" to register an existing directory; omit for fresh login flow +# +# Returns: +# 0 on success; 1 on validation or registration failure. +_ckipper_account_add() { + _core_registry_check_version || return 1 + local name="$1" should_adopt="false" + [[ "$2" == "--adopt" ]] && should_adopt="true" + _core_registry_init + _ckipper_account_add_validate_name "$name" || return 1 + local dir="$HOME/.claude-$name" + if [[ "$should_adopt" = "true" ]]; then + _ckipper_account_add_adopt_flow "$name" "$dir" + return $? + fi + _ckipper_account_add_fresh_flow "$name" "$dir" +} + +# Write the account entry to the registry and regenerate aliases atomically. +# On collision, diagnoses the cause and prints an appropriate error. +# Reads name, dir, and service from _CKIPPER_FINALIZE_CTX module global. +# +# Args: +# $1 — registration mode: "fresh" or "adopt" +# +# Returns: +# 0 on success; 1 on registry collision or write failure. +_ckipper_account_finalize_registration() { + local mode="$1" + local name="${_CKIPPER_FINALIZE_CTX[name]}" + local dir="${_CKIPPER_FINALIZE_CTX[dir]}" + local service="${_CKIPPER_FINALIZE_CTX[service]}" + local now; now=$(date -u +"%Y-%m-%dT%H:%M:%SZ") + local defaults; defaults=$(_core_registry_account_defaults_json) + _core_registry_init + if ! _core_registry_update ' + if (.accounts | has($n)) then + error("ALREADY_REGISTERED") + elif ([.accounts[].config_dir] | any(. == $d)) then + error("CONFIG_DIR_IN_USE") + else + .accounts[$n] = { + config_dir: $d, + keychain_service: (if $s == "" then null else $s end), + registered_at: $t, + preferences: $p + } + | (if .default == null then .default = $n else . 
end) + end + ' --arg n "$name" --arg d "$dir" --arg s "$service" --arg t "$now" --argjson p "$defaults"; then + _ckipper_account_finalize_diagnose_error "$name" "$dir" + return 1 + fi + _ckipper_account_finalize_announce "$name" "$mode" +} + +# Regenerate aliases, sync hooks, and print the post-registration usage hint. +# +# Args: +# $1 — account name +# $2 — registration mode label (e.g. "fresh", "adopt") +# +# Returns: 0 always. +_ckipper_account_finalize_announce() { + local name="$1" mode="$2" + _ckipper_account_regenerate_aliases + _ckipper_account_redeploy_hooks_for "$name" + echo "Registered '$name' (mode: $mode)." + if _ckipper_account_bare_alias_safe "$name"; then + echo "Use it via: claude-$name (or just: $name)" + else + echo "Use it via: claude-$name" + fi +} + +# Diagnose why _ckipper_account_finalize_registration failed and print the appropriate error. +# +# Args: +# $1 — account name +# $2 — account config directory +# +# Returns: +# 0 always (error message already printed to stderr). +_ckipper_account_finalize_diagnose_error() { + local name="$1" dir="$2" + if jq -e --arg n "$name" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then + echo "Error: account '$name' already exists in registry (race detected)." >&2 + elif jq -e --arg d "$dir" '[.accounts[].config_dir] | any(. == $d)' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then + echo "Error: config dir '$dir' is already claimed by another registered account." >&2 + else + echo "Error: failed to write account '$name' to registry $CKIPPER_REGISTRY" >&2 + fi +} + +# Return 0 if $1 is safe to use as a bare-alias function name (no clash with +# any existing PATH command, shell builtin, alias, or reserved word). Existing +# shell functions are not a clash — we expect to redefine those. +# +# Args: +# $1 — proposed alias name +# +# Returns: +# 0 if safe to use; 1 if it would shadow an existing command/builtin/alias/reserved word. 
+_ckipper_account_bare_alias_safe() {
+  local n="$1"
+  (( ${+commands[$n]} || ${+builtins[$n]} || ${+aliases[$n]} )) && return 1
+  local what; what=$(whence -w "$n" 2>/dev/null | awk '{print $2}')
+  [[ "$what" == "reserved" ]] && return 1
+  return 0
+}
+
+# Column widths (chars) used when rendering `ckipper account list` rows.
+# Matched against the header printed by _ckipper_account_list_header.
+readonly _CKIPPER_ACCOUNT_LIST_COL_NAME=14
+readonly _CKIPPER_ACCOUNT_LIST_COL_DIR=32
+readonly _CKIPPER_ACCOUNT_LIST_COL_EMAIL=28
+readonly _CKIPPER_ACCOUNT_LIST_COL_KEYCHAIN=10
+
+# Print the column-header row for `ckipper account list`.
+#
+# Returns: 0 always.
+_ckipper_account_list_header() {
+  printf '%-*s%-*s%-*s%-*s%s\n' \
+    "$_CKIPPER_ACCOUNT_LIST_COL_NAME" "NAME" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_DIR" "DIR" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_EMAIL" "EMAIL" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_KEYCHAIN" "KEYCHAIN" \
+    "DEFAULT"
+}
+
+# Print all registered accounts with their directories and email addresses.
+#
+# Returns:
+#   0 on success (including when no registry exists yet); 1 on registry
+#   version mismatch.
+_ckipper_account_list() {
+  if [[ ! -f "$CKIPPER_REGISTRY" ]]; then
+    echo "No accounts registered. Run: ckipper account add <name>"
+    return 0
+  fi
+  _core_registry_check_version || return 1
+  _CKIPPER_ACCOUNT_LIST_DEFAULT=$(jq -r '.default // ""' "$CKIPPER_REGISTRY")
+  _core_style_header "Registered accounts"
+  _ckipper_account_list_header
+  _core_style_divider
+  jq -r '.accounts | to_entries[] | "\(.key)\t\(.value.config_dir)\t\(.value.keychain_service // "null")"' "$CKIPPER_REGISTRY" | \
+  while IFS=$'\t' read -r name dir keychain; do
+    _ckipper_account_list_row "$name" "$dir" "$keychain"
+  done
+  echo ""
+  echo "* = default. Run: ckipper account default <name>"
+  echo ""
+  echo "Tip: don't run the same account in two terminals at once — Claude's OAuth refresh"
+  echo "is single-use, so the second session gets logged out. Use a different account instead."
+}
+
+# Shorten an absolute path under $HOME to a `~/`-prefixed form for display.
+#
+# Args: $1 — absolute path.
+# Returns: 0 always; prints the (possibly shortened) path.
+_ckipper_account_list_short_dir() {
+  local dir="$1"
+  [[ "$dir" == "$HOME"* ]] && printf '~%s' "${dir#$HOME}" || printf '%s' "$dir"
+}
+
+# Print a single account row for `ckipper account list`.
+#
+# Args:
+#   $1 — account name
+#   $2 — config directory
+#   $3 — keychain service ("null" string when unset)
+#
+# Reads `_CKIPPER_ACCOUNT_LIST_DEFAULT` (set by `_ckipper_account_list`) to
+# decide whether to mark this row as the default. Threading default through
+# the registry-stream pipeline as a 4th positional would break the
+# 3-parameter cap.
+#
+# Returns:
+#   0 always.
+_ckipper_account_list_row() {
+  local name="$1" dir="$2" keychain="$3"
+  local default="$_CKIPPER_ACCOUNT_LIST_DEFAULT"
+  local short_dir; short_dir=$(_ckipper_account_list_short_dir "$dir")
+  local email="-"
+  if [[ -f "$dir/.claude.json" ]]; then
+    email=$(jq -r '.oauthAccount.emailAddress // "-"' "$dir/.claude.json" 2>/dev/null)
+  fi
+  local keychain_status="no"
+  [[ "$keychain" != "null" && -n "$keychain" ]] && keychain_status="yes"
+  local marker=" "
+  [[ "$name" == "$default" ]] && marker=$(_core_style_color green "*")
+  printf '%-*s%-*s%-*s%-*s%s\n' \
+    "$_CKIPPER_ACCOUNT_LIST_COL_NAME" "$name" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_DIR" "$short_dir" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_EMAIL" "$email" \
+    "$_CKIPPER_ACCOUNT_LIST_COL_KEYCHAIN" "$keychain_status" \
+    "$marker"
+}
+
+# Set the default account in the registry.
+#
+# Args:
+#   $1 — account name to set as default
+#
+# Returns:
+#   0 on success; 1 if account is not registered.
+_ckipper_account_default() {
+  _core_registry_check_version || return 1
+  local name="$1"
+  [[ -z "$name" ]] && { echo "Usage: ckipper account default <name>" >&2; return 1; }
+  if ! jq -e --arg n "$name" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null; then
+    echo "Account '$name' is not registered." >&2
+    return 1
+  fi
+  if ! _core_registry_update '.default = $n' --arg n "$name"; then
+    echo "Error: failed to set default account in registry." >&2
+    return 1
+  fi
+  echo "Default account is now '$name'."
+}
+
+# Unregister an account, then prompt to delete its config dir and Keychain
+# entry via _ckipper_account_cleanup_*. Declining a prompt keeps the
+# file/entry and prints the manual cleanup command. Refuses to operate while
+# any Claude process is running (mirrors `account rename`) because the
+# subsequent `rm -rf` of the config dir would yank state out from under a
+# live session.
+#
+# Args:
+#   $1 — account name to remove
+#
+# Returns:
+#   0 on success; 1 if account is not registered or Claude is running.
+_ckipper_account_remove() {
+  _core_registry_check_version || return 1
+  local name="$1"
+  [[ -z "$name" ]] && { echo "Usage: ckipper account remove <name>" >&2; return 1; }
+  if ! jq -e --arg n "$name" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null; then
+    echo "Account '$name' is not registered." >&2
+    return 1
+  fi
+  _core_assert_no_running_claude || return 1
+  local dir; dir=$(jq -r --arg n "$name" '.accounts[$n].config_dir' "$CKIPPER_REGISTRY")
+  local service; service=$(jq -r --arg n "$name" '.accounts[$n].keychain_service // ""' "$CKIPPER_REGISTRY")
+  if ! _core_registry_update 'del(.accounts[$n]) | (if .default == $n then .default = null else . end)' --arg n "$name"; then
+    echo "Error: failed to unregister '$name' from the registry. Skipping cleanup of '$dir'." >&2
+    return 1
+  fi
+  # Drop the now-stale launcher functions from the calling shell.
+  unset -f "claude-$name" 2>/dev/null
+  unset -f "$name" 2>/dev/null
+  _ckipper_account_regenerate_aliases
+  echo "Unregistered '$name'."
+  _ckipper_account_cleanup_dir "$name" "$dir"
+  _ckipper_account_cleanup_keychain "$name" "$service"
+}
+
+# Validate arguments for `ckipper account rename` before performing the rename.
+#
+# Args:
+# $1 — old account name
+# $2 — new account name
+#
+# Returns:
+# 0 if valid; 1 on any validation failure.
+_ckipper_account_rename_validate() {
+ local old="$1" new="$2"
+ if [[ -z "$old" || -z "$new" ]]; then
+ echo "Usage: ckipper account rename <old> <new>" >&2
+ return 1
+ fi
+ if [[ ! "$new" =~ ^[a-z0-9_-]+$ ]]; then
+ echo "New name must match ^[a-z0-9_-]+$ (lowercase alphanumeric, underscore, hyphen)." >&2
+ return 1
+ fi
+ if [[ "$old" == "$new" ]]; then
+ echo "Old and new name are the same. Nothing to do." >&2
+ return 1
+ fi
+ if ! jq -e --arg n "$old" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then
+ echo "Account '$old' is not registered." >&2
+ return 1
+ fi
+ if jq -e --arg n "$new" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then
+ echo "Account '$new' is already registered." >&2
+ return 1
+ fi
+}
+
+# Verify the rename is safe before performing destructive actions.
+#
+# Args:
+# $1 — new directory path (must not exist)
+# $2 — old directory path (must be a directory)
+#
+# Returns: 0 if preconditions pass; 1 if any check fails.
+_ckipper_account_rename_check_preconditions() {
+ local new_dir="$1" old_dir="$2"
+ if [[ -e "$new_dir" ]]; then
+ echo "Error: $new_dir already exists. Pick a different name or remove it first." >&2
+ return 1
+ fi
+ if [[ ! -d "$old_dir" ]]; then
+ echo "Error: source directory $old_dir does not exist." >&2
+ return 1
+ fi
+ _core_assert_no_running_claude || return 1
+}
+
+# Perform the directory move and registry update for `ckipper account rename`.
+# Rolls back the directory rename if the registry write fails.
+# Reads old_dir and new_dir from _CKIPPER_RENAME_CTX module global.
+#
+# Args:
+# $1 — old account name
+# $2 — new account name
+#
+# Returns:
+# 0 on success; 1 on directory move or registry write failure.
+_ckipper_account_rename_perform() { + local old="$1" new="$2" + local old_dir="${_CKIPPER_RENAME_CTX[old_dir]}" + local new_dir="${_CKIPPER_RENAME_CTX[new_dir]}" + _ckipper_account_rename_check_preconditions "$new_dir" "$old_dir" || return 1 + if ! mv "$old_dir" "$new_dir" 2>/dev/null; then + echo "Error: failed to rename $old_dir → $new_dir." >&2 + return 1 + fi + if ! _core_registry_update ' + .accounts[$new] = .accounts[$old] | + .accounts[$new].config_dir = $newdir | + del(.accounts[$old]) | + (if .default == $old then .default = $new else . end) + ' --arg old "$old" --arg new "$new" --arg newdir "$new_dir"; then + mv "$new_dir" "$old_dir" 2>/dev/null + echo "Error: registry write failed; reverted directory rename." >&2 + return 1 + fi +} + +# Rename a registered account: moves its directory and updates the registry. +# +# Args: +# $1 — old account name +# $2 — new account name +# +# Returns: +# 0 on success; 1 on validation or rename failure. +_ckipper_account_rename() { + _core_registry_check_version || return 1 + local old="$1" new="$2" + _ckipper_account_rename_validate "$old" "$new" || return 1 + local old_dir new_dir + old_dir=$(jq -r --arg n "$old" '.accounts[$n].config_dir' "$CKIPPER_REGISTRY") + new_dir="$HOME/.claude-$new" + _CKIPPER_RENAME_CTX[old_dir]="$old_dir" + _CKIPPER_RENAME_CTX[new_dir]="$new_dir" + _ckipper_account_rename_perform "$old" "$new" || return 1 + # Drop old-name launcher functions from the calling shell. + unset -f "claude-$old" 2>/dev/null + unset -f "$old" 2>/dev/null + _ckipper_account_regenerate_aliases + _ckipper_account_redeploy_hooks_for "$new" # rewrite per-account settings.json hook paths to new dir + echo "Renamed '$old' → '$new'." 
+ echo "Directory: $old_dir → $new_dir" + if _ckipper_account_bare_alias_safe "$new"; then + echo "Use: claude-$new (or just: $new)" + else + echo "Use: claude-$new" + fi +} diff --git a/lib/account/account-management_test.bats b/lib/account/account-management_test.bats new file mode 100644 index 0000000..ca9c7d9 --- /dev/null +++ b/lib/account/account-management_test.bats @@ -0,0 +1,355 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/account-management.zsh helpers. +# Sources ckipper.zsh (which wires up all lib/core/ + lib/account/ modules). + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: run a zsh expression with the full ckipper environment sourced. +run_helper() { + run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="linux" \ + CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; $*" +} + +# ── _ckipper_account_add_validate_name ─────────────────────────────────────── + +@test "validate_name accepts valid lowercase-alphanumeric names" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name "myaccount"' + + [ "$status" -eq 0 ] +} + +@test "validate_name accepts names with hyphens and underscores" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name "my-account_1"' + + [ "$status" -eq 0 ] +} + +@test "validate_name rejects an empty name" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name ""' + + [ "$status" -ne 0 ] + [[ "$output" =~ [Uu]sage ]] +} + +@test "validate_name rejects names with uppercase letters" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name 
"MyAccount"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "must match" ]] +} + +@test "validate_name rejects names with spaces" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name "my account"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "must match" ]] +} + +@test "validate_name rejects a name already registered" { + echo '{"version":1,"default":"work","accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_add_validate_name "work"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "already registered" ]] +} + +# ── _ckipper_account_finalize_registration ─────────────────────────────────── + +@test "account add stores preferences with safe defaults" { + echo '{"version":2,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_CKIPPER_FINALIZE_CTX[name]="work"; _CKIPPER_FINALIZE_CTX[dir]="/tmp/.claude-work"; _CKIPPER_FINALIZE_CTX[service]=""; _ckipper_account_finalize_registration "adopt"' + + [ "$status" -eq 0 ] + local always_docker always_firewall ssh_forward + always_docker=$(jq -r '.accounts.work.preferences.always_docker' "$CKIPPER_REGISTRY") + always_firewall=$(jq -r '.accounts.work.preferences.always_firewall' "$CKIPPER_REGISTRY") + ssh_forward=$(jq -r '.accounts.work.preferences.ssh_forward' "$CKIPPER_REGISTRY") + [ "$always_docker" = "false" ] + [ "$always_firewall" = "false" ] + [ "$ssh_forward" = "true" ] +} + +# ── _ckipper_account_bare_alias_safe ───────────────────────────────────────── + +@test "bare_alias_safe returns 1 for shell builtin 'cd'" { + # 'cd' is a zsh builtin — using it as a bare alias would shadow it. + run_helper '_ckipper_account_bare_alias_safe "cd" && echo SAFE || echo UNSAFE' + + [[ "$output" =~ "UNSAFE" ]] +} + +@test "bare_alias_safe returns 0 for an invented name that cannot shadow anything" { + # A random name with no PATH binary, no builtin, no alias. 
+ run_helper '_ckipper_account_bare_alias_safe "xyzzy_no_clash_9q7" && echo SAFE || echo UNSAFE' + + [[ "$output" =~ "SAFE" ]] +} + +# ── _ckipper_account_list ──────────────────────────────────────────────────── + +@test "list shows 'No accounts' message when registry is missing" { + rm -f "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_list' + + [ "$status" -eq 0 ] + [[ "$output" =~ "No accounts" ]] +} + +@test "list shows registered account name" { + echo '{"version":1,"default":"work","accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_list' + + [ "$status" -eq 0 ] + [[ "$output" =~ "work" ]] +} + +@test "list marks the default account with an asterisk" { + echo '{"version":1,"default":"work","accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_list' + + [ "$status" -eq 0 ] + # Default marker is rendered in the trailing DEFAULT column (post-restyle). + # Require `work` then non-newline padding then `*` on the SAME line. + # [[:blank:]] is space/tab only (no \n) and [^[:cntrl:]] excludes \n, so + # the legend line `* = default ...` cannot satisfy this regex via + # cross-line matching. 
+ [[ "$output" =~ work[[:blank:]]+[^[:cntrl:]]*\* ]] + [[ "$output" =~ "* = default" ]] +} + +# ── _ckipper_account_default ────────────────────────────────────────────────── + +@test "default sets the default account in the registry" { + echo '{"version":1,"default":null,"accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_default "work"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "work" ]] + local val; val=$(jq -r '.default' "$CKIPPER_REGISTRY") + [ "$val" = "work" ] +} + +@test "default fails when account is not registered" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_default "nobody"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "not registered" ]] +} + +# ── _ckipper_account_remove ─────────────────────────────────────────────────── + +@test "remove unregisters a known account and exits 0" { + # Use $TMP_HOME-relative dir so the cleanup helpers find no directory and + # don't prompt — this test only asserts the unregistration outcome. 
+ printf '{"version":1,"default":null,"accounts":{"tmp":{"config_dir":"%s/.claude-tmp","keychain_service":null}}}\n' \ + "$TMP_HOME" > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_remove "tmp"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "Unregistered" ]] +} + +@test "remove fails for an account that is not registered" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_remove "nobody"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "not registered" ]] +} + +# ── _ckipper_account_rename_validate ────────────────────────────────────────── + +@test "rename_validate rejects an empty old name" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_rename_validate "" "newname"' + + [ "$status" -ne 0 ] + [[ "$output" =~ [Uu]sage ]] +} + +@test "rename_validate rejects a new name with uppercase letters" { + echo '{"version":1,"default":null,"accounts":{"old":{"config_dir":"/tmp/.claude-old","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_rename_validate "old" "NewName"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "must match" ]] +} + +@test "rename_validate rejects rename when old name is not registered" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_rename_validate "ghost" "newname"' + + [ "$status" -ne 0 ] + [[ "$output" =~ "not registered" ]] +} + +# ── _ckipper_account_add_pick_keychain_entry ───────────────────────────────── +# Regression: zsh has no working `local -n` / `typeset -n`, so the previous +# nameref-style implementation silently leaked the picked value to a global +# named `_picked_ref` and the caller's variable stayed empty. The contract is +# now stdout-capture: the function echoes the picked service to stdout (or +# nothing on skip / no candidates). 
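The stdout-capture contract this regression note describes can be sketched in plain POSIX shell. `pick_entry` and `choose` below are hypothetical stand-ins for `_ckipper_account_add_pick_keychain_entry` and `_core_prompt_choose`, not Ckipper's real implementations — the point is only the shape of the contract: result on stdout, empty output for skip/no-candidates.

```shell
# Hypothetical sketch of the stdout-capture contract: instead of assigning
# through a caller-named variable (a nameref, which zsh lacks), the picker
# prints its result and the caller captures it with command substitution.
choose() {                            # stand-in for the interactive chooser
  printf '%s\n' "svc-personal"
}

pick_entry() {                        # stand-in for the keychain picker
  candidates="svc-default svc-personal"
  [ -z "$candidates" ] && return 0    # no candidates: emit nothing, succeed
  picked=$(choose)
  [ -z "$picked" ] && return 0        # skip: emit nothing, succeed
  printf '%s\n' "$picked"             # the contract: result goes to stdout
}

service=$(pick_entry) || exit 1
echo "picked service: $service"
```

Because the value travels over stdout, the same function works identically in zsh, bash, and dash — no shell-specific reference machinery is involved.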
+ +@test "pick_keychain_entry echoes the picked service to stdout" { + run_helper ' + _core_keychain_snapshot() { printf "Claude Code-credentials\nClaude Code-credentials-personal\n"; } + _core_prompt_choose() { echo "Claude Code-credentials-personal"; } + _core_keychain_validate() { return 0; } + _ckipper_account_add_pick_keychain_entry myaccount + ' + + [ "$status" -eq 0 ] + [ "$output" = "Claude Code-credentials-personal" ] +} + +@test "pick_keychain_entry emits nothing on skip selection" { + run_helper ' + _core_keychain_snapshot() { printf "Claude Code-credentials\n"; } + _core_prompt_choose() { echo "$_CKIPPER_ACCOUNT_KEYCHAIN_SKIP_LABEL"; } + _ckipper_account_add_pick_keychain_entry myaccount + ' + + [ "$status" -eq 0 ] + [ -z "$output" ] +} + +@test "pick_keychain_entry emits nothing when keychain_snapshot returns no candidates" { + run_helper ' + _core_keychain_snapshot() { :; } + _ckipper_account_add_pick_keychain_entry myaccount + ' + + [ "$status" -eq 0 ] + [ -z "$output" ] +} + +@test "pick_keychain_entry returns 1 with stderr error when picked service has bad shape" { + run_helper ' + _core_keychain_snapshot() { echo "bogus-service"; } + _core_prompt_choose() { echo "bogus-service"; } + _core_keychain_validate() { return 1; } + _ckipper_account_add_pick_keychain_entry myaccount + ' + + [ "$status" -ne 0 ] + [[ "$output" =~ "Invalid Keychain service shape" ]] +} + +# ── _ckipper_account_remove ────────────────────────────────────────────────── +# Regression: `account remove` performed `rm -rf` on the account's config dir +# (via _ckipper_account_cleanup_dir) without first asserting that no Claude +# process was running. A user with Claude open in another terminal could lose +# their live session's config dir. `account rename` already had the guard; +# `remove` is strictly more destructive and now mirrors it. 
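The guard ordering this note describes — assert first, destroy second — can be sketched generically. `assert_not_running` and `remove_config_dir` are hypothetical stand-ins, not Ckipper's actual `_core_assert_no_running_claude` or `_ckipper_account_remove`, and the `pgrep`-based check is an assumption about how such a guard might be implemented:

```shell
# Minimal sketch of guarding a destructive operation behind a
# no-running-process assertion (assumed helper names).
assert_not_running() {
  # pgrep -x matches the exact process name; any match means refuse.
  if pgrep -x "$1" >/dev/null 2>&1; then
    echo "Refusing: '$1' is still running." >&2
    return 1
  fi
  return 0
}

remove_config_dir() {
  # Guard BEFORE the destructive rm -rf, mirroring the regression fix.
  assert_not_running "$1" || return 1
  rm -rf "$2"
  echo "Removed $2"
}

dir=$(mktemp -d)
remove_config_dir "no_such_process_xyz" "$dir"
```

The ordering matters: once `rm -rf` runs, there is nothing left to protect, so the assertion must be the first statement of every destructive entry point rather than something callers are trusted to do.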
+
+@test "account remove refuses to act when a Claude process is running" {
+ local dir="$TMP_HOME/.claude-work"
+ mkdir -p "$dir"
+ cat > "$CKIPPER_REGISTRY" <<JSON
+{"version":1,"default":null,"accounts":{"work":{"config_dir":"$dir","keychain_service":null}}}
+JSON
+
+ run_helper '
+ _core_assert_no_running_claude() { return 1; }
+ _ckipper_account_remove work
+ '
+
+ [ "$status" -ne 0 ]
+ [ -d "$dir" ]
+}
+
+@test "account default surfaces registry-update failure" {
+ cat > "$CKIPPER_REGISTRY" <<'JSON'
+{"version":2,"default":null,"accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null,"registered_at":"t","preferences":{}}}}
+JSON
+
+ run_helper '
+ _core_registry_update() { return 1; }
+ _ckipper_account_default work
+ '
+
+ [ "$status" -ne 0 ]
+ [[ "$output" =~ "Error" ]]
+ [[ ! "$output" =~ "Default account is now" ]]
+}
+
+@test "account remove surfaces registry-update failure and skips destructive cleanup" {
+ local dir="$TMP_HOME/.claude-work"
+ mkdir -p "$dir"
+ cat > "$CKIPPER_REGISTRY" <<JSON
+{"version":1,"default":null,"accounts":{"work":{"config_dir":"$dir","keychain_service":null}}}
+JSON
+
+ run_helper '
+ _core_registry_update() { return 1; }
+ _ckipper_account_remove work
+ '
+
+ [ "$status" -ne 0 ]
+ [[ "$output" =~ "Skipping cleanup" ]]
+ [ -d "$dir" ]
+}
diff --git a/lib/account/aliases.zsh b/lib/account/aliases.zsh
new file mode 100644
--- /dev/null
+++ b/lib/account/aliases.zsh
+#!/usr/bin/env zsh
+# Alias generation for registered accounts: per-account `claude-<name>`
+# launcher functions, optional bare `<name>` shortcuts, and the bare-claude
+# guard.
+
+# Emit the `claude-<name>` function and, if safe, a bare `<name>` shortcut.
+#
+# Args:
+# $1 — account name
+# $2 — account config directory path
+#
+# Returns:
+# 0 always.
+_ckipper_account_generate_account_launcher_function() {
+ local account_name="$1" account_dir="$2"
+ echo "claude-$account_name() { CLAUDE_CONFIG_DIR=\"$account_dir\" command claude \"\$@\"; }"
+ # Bare-name shortcut: also generate `<name>` so users can type the
+ # account name directly. Skip if it would shadow a real binary, builtin,
+ # alias, or reserved word.
+ if _ckipper_account_bare_alias_safe "$account_name"; then
+ echo "$account_name() { CLAUDE_CONFIG_DIR=\"$account_dir\" command claude \"\$@\"; }"
+ else
+ echo "# Bare-name alias '$account_name' skipped (would shadow existing command)."
+ fi
+}
+
+# Write the bare-claude guard function to stdout.
+# Blocks bare `claude` invocations when accounts are registered to prevent
+# credential overwrites on the default account's keychain entry.
+#
+# Returns:
+# 0 always.
+_ckipper_account_write_bare_claude_guard() {
+ echo "claude() {"
+ echo " if [[ -f \"\$_CKIPPER_REGISTRY\" ]] && jq -e '.accounts | length > 0' \"\$_CKIPPER_REGISTRY\" >/dev/null 2>&1; then"
+ echo " local default"
+ echo " default=\$(jq -r '.default // \"\"' \"\$_CKIPPER_REGISTRY\" 2>/dev/null)"
+ echo " echo \"Refusing to launch bare 'claude' — Ckipper has registered accounts.\" >&2"
+ echo " echo \"\" >&2"
+ echo " echo \"Bare 'claude' uses ~/.claude/ and writes to the Keychain entry your\" >&2"
+ echo " echo \"default account ('\${default:-personal}') is registered against. A fresh\" >&2"
+ echo " echo \"/login here would silently overwrite those credentials.\" >&2"
+ echo " echo \"\" >&2"
+ echo " if [[ -n \"\$default\" ]]; then"
+ echo " echo \"Use: claude-\$default\" >&2"
+ echo " else"
+ echo " echo \"Set a default first: ckipper account default <name>, then use claude-<name>.\" >&2"
+ echo " fi"
+ echo " echo \"\" >&2"
+ echo " echo \"To bypass (fresh login on purpose): command claude \\\$@\" >&2"
+ echo " return 1"
+ echo " fi"
+ echo " command claude \"\$@\""
+ echo "}"
+}
+
+# Regenerate ~/.ckipper/aliases.zsh from the current registry and re-source it
+# in the calling shell so newly-registered accounts are usable immediately.
+#
+# Returns:
+# 0 always.
+_ckipper_account_regenerate_aliases() {
+ local out="$CKIPPER_DIR/aliases.zsh"
+ local account_name account_dir
+ {
+ echo "# Auto-generated by ckipper. Do not edit by hand."
+ echo "# Self-contained: does not depend on ckipper.zsh being sourced."
+ echo "# Regenerated whenever an account is added or removed."
+ echo "" + echo "_CKIPPER_REGISTRY=\"\${CKIPPER_DIR:-\$HOME/.ckipper}/accounts.json\"" + echo "" + _ckipper_account_write_bare_claude_guard + echo "" + if [[ -f "$CKIPPER_REGISTRY" ]]; then + jq -r '.accounts | to_entries[] | "\(.key)\t\(.value.config_dir)"' "$CKIPPER_REGISTRY" | \ + while IFS=$'\t' read -r account_name account_dir; do + _ckipper_account_generate_account_launcher_function "$account_name" "$account_dir" + done + fi + } > "$out.tmp" + # Atomic install — readers in other shells never see a partial file. + mv "$out.tmp" "$out" + chmod "$_CKIPPER_ACCOUNT_ALIASES_FILE_PERMS" "$out" + # Re-source in the calling shell so newly-registered accounts are usable + # immediately without the user having to `exec zsh`. + source "$out" +} + +# Rewrite settings.json hook paths to absolute paths for the given account directory. +# Consumes any prefix matching $HOME/.claude*/hooks/ or $HOME/.ckipper/hooks/. +# +# Args: +# $1 — account config directory path +# +# Returns: +# 0 always. +_ckipper_account_rewrite_settings_json_hooks() { + local dir="$1" + [[ -f "$dir/settings.json" ]] || return 0 + command -v jq &>/dev/null || return 0 + local settings_tmpfile; settings_tmpfile=$(mktemp "$dir/.settings.tmp.XXXXXX") + jq --arg d "$dir" ' + (.hooks // {}) as $h | + .hooks = ($h | walk( + if type == "string" and test("\\$HOME/(\\.claude(-[a-z0-9_-]+)?|\\.ckipper)/hooks/") + then sub("\\$HOME/(\\.claude(-[a-z0-9_-]+)?|\\.ckipper)/hooks/"; "\($d)/hooks/") + else . end + )) + ' "$dir/settings.json" > "$settings_tmpfile" && mv "$settings_tmpfile" "$dir/settings.json" +} + +# Redeploy ckipper-managed safety hooks from $CKIPPER_DIR/hooks/ into one +# account dir and rewrite that account's settings.json hook paths to absolute +# destination paths. This is install→one redeploy, not peer-to-peer sync. +# +# Allows callers (e.g. _ckipper_account_add) to pass the dir directly before +# the account is registered in the registry. 
+# +# Args: +# $1 — account name +# $2 — (optional) account config directory; looked up in registry if omitted +# +# Returns: +# 0 on success; 1 if directory cannot be resolved. +_ckipper_account_redeploy_hooks_for() { + local name="$1" dir="${2:-}" + if [[ -z "$dir" ]]; then + _core_registry_check_version || return 1 + dir=$(jq -r --arg n "$name" '.accounts[$n].config_dir' "$CKIPPER_REGISTRY") + [[ -z "$dir" || "$dir" == "null" ]] && return 1 + fi + mkdir -p "$dir/hooks" + cp -a "$CKIPPER_DIR/hooks/." "$dir/hooks/" 2>/dev/null || true + _ckipper_account_rewrite_settings_json_hooks "$dir" +} + +# Redeploy ckipper-managed safety hooks into every registered account dir. +# +# This is install→all redeploy (NOT peer-to-peer sync). Run after editing +# anything under $CKIPPER_DIR/hooks/ so the change propagates everywhere. +# For peer-to-peer sync of user-written hooks, see `ckipper account sync +# --include hooks`. +# +# Returns: +# 0 on success; 1 if registry version check fails. +_ckipper_account_redeploy_hooks() { + if [[ ! -f "$CKIPPER_REGISTRY" ]]; then + echo "No accounts registered." + return 0 + fi + _core_registry_check_version || return 1 + local names; names=$(jq -r '.accounts | keys[]' "$CKIPPER_REGISTRY") + while IFS= read -r name; do + echo "Redeploying install hooks → $name" + _ckipper_account_redeploy_hooks_for "$name" + done <<< "$names" +} diff --git a/lib/account/aliases_test.bats b/lib/account/aliases_test.bats new file mode 100644 index 0000000..67f8d01 --- /dev/null +++ b/lib/account/aliases_test.bats @@ -0,0 +1,109 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/aliases.zsh helpers. +# Sources ckipper.zsh (which wires up all lib/core/ + lib/account/ modules). + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: run a zsh expression with the full ckipper environment sourced. 
+run_helper() { + run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="linux" \ + CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; $*" +} + +# ── _ckipper_account_regenerate_aliases ─────────────────────────────────────── + +@test "regenerate_aliases creates aliases.zsh with mode 644" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_regenerate_aliases' + + local out="$CKIPPER_DIR/aliases.zsh" + assert_file_exists "$out" + assert_file_mode "$out" "644" +} + +@test "regenerate_aliases includes a launcher function for each registered account" { + echo '{"version":1,"default":"dev","accounts":{"dev":{"config_dir":"/tmp/.claude-dev","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_ckipper_account_regenerate_aliases' + + local out="$CKIPPER_DIR/aliases.zsh" + assert_file_exists "$out" + grep -q "claude-dev()" "$out" +} + +# ── _ckipper_account_generate_account_launcher_function ─────────────────────── + +@test "generate_account_launcher_function emits a claude- function" { + run_helper '_ckipper_account_generate_account_launcher_function "work" "/tmp/.claude-work"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "claude-work()" ]] +} + +@test "generate_account_launcher_function sets CLAUDE_CONFIG_DIR in the emitted body" { + run_helper '_ckipper_account_generate_account_launcher_function "work" "/tmp/.claude-work"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "CLAUDE_CONFIG_DIR" ]] + [[ "$output" =~ "/tmp/.claude-work" ]] +} + +# ── _ckipper_account_redeploy_hooks_for ────────────────────────────────────────── + +@test "redeploy_hooks_for copies hooks into the account directory" { + echo '{"version":1,"default":"dev","accounts":{"dev":{"config_dir":"'"$TMP_HOME"'/.claude-dev","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + mkdir -p "$TMP_HOME/.claude-dev" + # Seed the shared hooks directory with 
a test hook. + mkdir -p "$CKIPPER_DIR/hooks" + echo "#!/bin/sh" > "$CKIPPER_DIR/hooks/test-hook.sh" + + run_helper '_ckipper_account_redeploy_hooks_for "dev"' + + [ "$status" -eq 0 ] + [ -f "$TMP_HOME/.claude-dev/hooks/test-hook.sh" ] +} + +@test "redeploy_hooks_for rewrites dollar-HOME hook paths in settings.json" { + echo '{"version":1,"default":"dev","accounts":{"dev":{"config_dir":"'"$TMP_HOME"'/.claude-dev","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + mkdir -p "$TMP_HOME/.claude-dev/hooks" + # settings.json has a literal $HOME placeholder in a hook path. + printf '{"hooks":{"PreToolUse":[{"matcher":"*","hooks":[{"type":"command","command":"$HOME/.ckipper/hooks/pre.sh"}]}]}}' \ + > "$TMP_HOME/.claude-dev/settings.json" + + run_helper '_ckipper_account_redeploy_hooks_for "dev"' + + # After rewriting, the path should point to the account's hooks dir. + grep -q "$TMP_HOME/.claude-dev/hooks/pre.sh" "$TMP_HOME/.claude-dev/settings.json" +} + +# ── _ckipper_account_redeploy_hooks ─────────────────────────────────────────── + +@test "redeploy_hooks iterates all registered accounts and copies hooks to each" { + local dir_a="$TMP_HOME/.claude-alpha" + local dir_b="$TMP_HOME/.claude-beta" + mkdir -p "$dir_a" "$dir_b" + echo '{"version":1,"default":"alpha","accounts":{"alpha":{"config_dir":"'"$dir_a"'","keychain_service":null},"beta":{"config_dir":"'"$dir_b"'","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + mkdir -p "$CKIPPER_DIR/hooks" + echo "#!/bin/sh" > "$CKIPPER_DIR/hooks/shared-hook.sh" + + run_helper '_ckipper_account_redeploy_hooks' + + [ "$status" -eq 0 ] + [ -f "$dir_a/hooks/shared-hook.sh" ] + [ -f "$dir_b/hooks/shared-hook.sh" ] +} diff --git a/lib/account/cleanup.zsh b/lib/account/cleanup.zsh new file mode 100644 index 0000000..08bfe80 --- /dev/null +++ b/lib/account/cleanup.zsh @@ -0,0 +1,54 @@ +#!/usr/bin/env zsh +# Interactive cleanup helpers for `ckipper account remove`. 
+#
+# After unregistering an account, prompts the user to delete the config dir
+# and the macOS Keychain entry. Both prompts honor CKIPPER_NO_GUM via the
+# shared _core_prompt_confirm helper.
+#
+# Depends on lib/core/prompt.zsh (`_core_prompt_confirm`).
+
+# Prompt the user to delete the account's config directory, and remove it on
+# confirmation. No-op when the directory does not exist.
+#
+# Args:
+# $1 — account name (used in the friendly log line on success)
+# $2 — absolute path to the config directory
+#
+# Returns: 0 on success (or when the user declines); 1 if the underlying
+# `rm -rf` failed.
+_ckipper_account_cleanup_dir() {
+ local name="$1" dir="$2"
+ [[ ! -d "$dir" ]] && return 0
+ if _core_prompt_confirm "Delete config dir $dir?"; then
+ rm -rf "$dir" || return 1
+ echo "Deleted config dir for '$name' ($dir)."
+ return 0
+ fi
+ echo "Kept config dir for '$name'. To remove it manually:"
+ printf " rm -rf %q\n" "$dir"
+}
+
+# Prompt the user to delete the account's macOS Keychain entry, and remove
+# it on confirmation. No-op when the service is empty or the host is not
+# darwin (Keychain is macOS-only). On confirmation, surfaces a Failed line
+# (with manual retry command) when the underlying `security` call fails.
+#
+# Args:
+# $1 — account name (used in the friendly log line on success)
+# $2 — Keychain service name (e.g. "Claude Code-credentials-<name>")
+#
+# Returns: 0 always.
+_ckipper_account_cleanup_keychain() {
+ local name="$1" service="$2"
+ [[ -z "$service" || "${_CKIPPER_TEST_OSTYPE:-$OSTYPE}" != darwin* ]] && return 0
+ if ! _core_prompt_confirm "Delete Keychain entry '$service'?"; then
+ echo "Kept Keychain entry for '$name'. To remove it manually:"
+ printf " security delete-generic-password -s %q\n" "$service"
+ return 0
+ fi
+ if security delete-generic-password -s "$service" >/dev/null 2>&1; then
+ echo "Deleted Keychain entry '$service' for '$name'."
+ return 0
+ fi
+ echo "Failed to delete Keychain entry '$service' for '$name'.
To retry manually:" + printf " security delete-generic-password -s %q\n" "$service" +} diff --git a/lib/account/cleanup_test.bats b/lib/account/cleanup_test.bats new file mode 100644 index 0000000..d4d2bce --- /dev/null +++ b/lib/account/cleanup_test.bats @@ -0,0 +1,101 @@ +#!/usr/bin/env bats +# Module-level tests for lib/account/cleanup.zsh. +# cleanup.zsh is zsh-only (uses _core_prompt_confirm), so every assertion +# spawns a zsh subshell that sources prompt.zsh + cleanup.zsh and runs the +# function under test (matching the pattern in prereqs_test.bats). +# +# CKIPPER_NO_GUM=1 forces the pure-zsh fallback path inside _core_prompt_* +# helpers so tests are deterministic regardless of whether `gum` is installed +# on the runner. Tests pipe stdin to the zsh subshell so the embedded +# `_core_prompt_confirm` read receives "y" or "n". + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: source prompt.zsh + cleanup.zsh in zsh, feed stdin, run zsh_cmd. +# +# Args: $1 — stdin payload; $2 — zsh command to execute. +# Side effect: populates $status / $output / $lines as with bats `run`. +_run_cleanup() { + local stdin="$1" zsh_cmd="$2" + run env CKIPPER_NO_GUM=1 \ + HOME="$TMP_HOME" PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="${_CKIPPER_TEST_OSTYPE:-linux}" \ + zsh -c " + source \"$REPO_ROOT/lib/core/prompt.zsh\" + source \"$REPO_ROOT/lib/account/cleanup.zsh\" + $zsh_cmd + " <<<"$stdin" +} + +# ── _ckipper_account_cleanup_dir ───────────────────────────────────────────── + +@test "cleanup_dir deletes when user confirms" { + mkdir -p "$TMP_HOME/.claude-foo" + + _run_cleanup "y" "_ckipper_account_cleanup_dir foo \"$TMP_HOME/.claude-foo\"" + + [ "$status" -eq 0 ] + [ ! 
-d "$TMP_HOME/.claude-foo" ] +} + +@test "cleanup_dir keeps dir when user declines" { + mkdir -p "$TMP_HOME/.claude-foo" + + _run_cleanup "n" "_ckipper_account_cleanup_dir foo \"$TMP_HOME/.claude-foo\"" + + [ "$status" -eq 0 ] + [ -d "$TMP_HOME/.claude-foo" ] + [[ "$output" =~ "rm -rf" ]] +} + +@test "cleanup_dir is a no-op when dir does not exist" { + _run_cleanup "" "_ckipper_account_cleanup_dir foo \"$TMP_HOME/.claude-missing\"" + + [ "$status" -eq 0 ] + [[ "$output" != *"Delete config dir"* ]] +} + +# ── _ckipper_account_cleanup_keychain ──────────────────────────────────────── + +@test "cleanup_keychain is a no-op on non-darwin" { + _CKIPPER_TEST_OSTYPE=linux \ + _run_cleanup "y" "_ckipper_account_cleanup_keychain foo 'Claude Code-credentials'" + + [ "$status" -eq 0 ] + [[ "$output" != *"Deleted"* ]] + [[ "$output" != *"Delete Keychain"* ]] +} + +@test "cleanup_keychain is a no-op when service is empty" { + _CKIPPER_TEST_OSTYPE=darwin19.0 \ + _run_cleanup "y" "_ckipper_account_cleanup_keychain foo ''" + + [ "$status" -eq 0 ] + [[ "$output" != *"Deleted"* ]] + [[ "$output" != *"Delete Keychain"* ]] +} + +@test "cleanup_keychain calls security on darwin when user confirms" { + _CKIPPER_TEST_OSTYPE=darwin19.0 \ + _run_cleanup "y" "_ckipper_account_cleanup_keychain foo 'Claude Code-credentials'" + + [ "$status" -eq 0 ] + [[ "$output" =~ "Deleted" ]] +} + +@test "cleanup_keychain leaves entry when user declines on darwin" { + _CKIPPER_TEST_OSTYPE=darwin19.0 \ + _run_cleanup "n" "_ckipper_account_cleanup_keychain foo 'Claude Code-credentials'" + + [ "$status" -eq 0 ] + [[ "$output" != *"Deleted"* ]] + [[ "$output" =~ "security delete-generic-password" ]] +} diff --git a/lib/account/dispatcher.zsh b/lib/account/dispatcher.zsh new file mode 100644 index 0000000..568d8ac --- /dev/null +++ b/lib/account/dispatcher.zsh @@ -0,0 +1,168 @@ +#!/usr/bin/env zsh +# Account-namespace dispatcher and help text. 
+# +# Routes `ckipper account <subcommand>` to the matching _ckipper_account_* +# function, prints overview/per-subcommand help, and suggests the closest +# subcommand on a typo via _core_fuzzy_suggest. + +# Known account subcommands. Used both for routing and for fuzzy-suggest. +# +# `repair-plugins` was retired in favour of `ckipper doctor --fix`. +_CKIPPER_ACCOUNT_SUBCOMMANDS=( + add list default remove rename sync redeploy-hooks help +) + +# Account-namespace renames. Used by _ckipper_account_unknown to print a +# rename hint instead of a bare unknown-command line, since fuzzy distance +# is too far for "sync-hooks" → "redeploy-hooks" to be auto-suggested. +typeset -gA _CKIPPER_ACCOUNT_LEGACY_COMMANDS=( + [sync-hooks]='redeploy-hooks' +) + +# Dispatch an `account` subcommand. +# +# Args: +# $1 — subcommand name (add, list, default, remove, rename, sync, +# redeploy-hooks, help, -h, --help, or empty) +# $2..$N — arguments forwarded to the subcommand handler +# +# Returns: 0 on success; 1 on unknown subcommand. +# +# Errors (stderr): +# "Unknown command: '<cmd>'. Did you mean: '<suggestion>'? ..." +_ckipper_account_dispatch() { + local cmd="$1" + shift 2>/dev/null + case "$cmd" in + add|list|default|remove|rename|redeploy-hooks) + if [[ "$1" == "--help" || "$1" == "-h" ]]; then + _ckipper_account_help_for "$cmd" + return 0 + fi + "_ckipper_account_${cmd//-/_}" "$@" + ;; + sync) + _ckipper_account_sync_dispatch "$@" + ;; + ""|help|-h|--help) _ckipper_account_help ;; + *) _ckipper_account_unknown "$cmd"; return 1 ;; + esac +} + +# Print a rename hint for a retired account-namespace name, or fall through +# to the standard unknown-command + fuzzy-suggest path. Always writes to +# stderr. +# +# Args: $1 — the unknown subcommand the user typed. +# Returns: 0 always.
+_ckipper_account_unknown() { + local cmd="$1" + if (( ${+_CKIPPER_ACCOUNT_LEGACY_COMMANDS[$cmd]} )); then + echo "'ckipper account $cmd' was renamed to 'ckipper account ${_CKIPPER_ACCOUNT_LEGACY_COMMANDS[$cmd]}' — pass the same arguments." >&2 + echo "Run 'ckipper account help' for the current command list." >&2 + return 0 + fi + _core_unknown_command "$cmd" \ + "Run 'ckipper account help' for available commands." \ + "${_CKIPPER_ACCOUNT_SUBCOMMANDS[@]}" +} + +# Print the account-namespace usage summary. +# +# Returns: 0 always. +_ckipper_account_help() { + _core_help_render "ckipper account — manage registered Claude accounts" \ + "" \ + "Usage:" \ + " ckipper account add <name> Register a new account (interactive /login)" \ + " ckipper account add <name> --adopt Register an existing populated config dir" \ + " ckipper account list Show registered accounts" \ + " ckipper account default <name> Set the default account" \ + " ckipper account remove <name> Unregister; prompts to delete dir + Keychain" \ + " ckipper account rename <old> <new> Rename an account in place" \ + " ckipper account sync [<source> <target>] Sync state peer-to-peer between accounts" \ + " ckipper account redeploy-hooks Redeploy installed safety hooks to all accounts" \ + "" \ + "Short form: \`ckipper acct ...\` is equivalent." \ + "" \ + "Run \`ckipper account <subcommand> --help\` for per-subcommand details." +} + +# Per-subcommand help text router. Each arm prints a focused usage block. +# The `sync` arm forwards to the sync subsystem's own help text since that +# module owns its CLI surface. +# +# Args: $1 — subcommand name. +# Returns: 0 always.
+_ckipper_account_help_for() { + case "$1" in + add) _ckipper_account_help_text_add ;; + list) _ckipper_account_help_text_list ;; + default) _ckipper_account_help_text_default ;; + remove) _ckipper_account_help_text_remove ;; + rename) _ckipper_account_help_text_rename ;; + sync) _ckipper_account_sync_help_text ;; + redeploy-hooks) _ckipper_account_help_text_redeploy_hooks ;; + esac +} + +_ckipper_account_help_text_add() { + _core_help_render "ckipper account add <name> [--adopt]" \ + "" \ + "Register a new account. <name> must match ^[a-z0-9_-]+$." \ + "" \ + "Without --adopt: creates ~/.claude-<name>/ and walks you through /login." \ + "With --adopt: registers an existing populated ~/.claude-<name>/ directory." +} + +_ckipper_account_help_text_list() { + _core_help_render "ckipper account list" \ + "" \ + "Print registered accounts: name, config dir, keychain service, default flag," \ + "and last-login email (read from each account's .claude.json, if present)." +} + +_ckipper_account_help_text_default() { + _core_help_render "ckipper account default <name>" \ + "" \ + "Set the default account used when no \`--account\` flag and no" \ + "\$CLAUDE_CONFIG_DIR env var are provided." +} + +_ckipper_account_help_text_remove() { + _core_help_render "ckipper account remove <name>" \ + "" \ + "Unregister an account from the registry and aliases, then interactively" \ + "prompt to delete the config dir and the macOS Keychain entry. Decline" \ + "either prompt to keep the file/entry; the manual cleanup command is shown." +} + +_ckipper_account_help_text_rename() { + _core_help_render "ckipper account rename <old> <new>" \ + "" \ + "Rename a registered account in place:" \ + " - Renames ~/.claude-<old>/ → ~/.claude-<new>/" \ + " - Updates the registry (key + config_dir)" \ + " - If <old> was the default, makes <new> the default" \ + " - Regenerates aliases.zsh and re-syncs hooks" \ + " - Refuses if any Claude session is running (so the dir isn't held open)" \ + "" \ + "Keychain service name is NOT changed — only the dir + registry mapping."
+} + +_ckipper_account_help_text_redeploy_hooks() { + _core_help_render "ckipper account redeploy-hooks" \ + "" \ + "Redeploy ckipper-managed safety hooks from \$CKIPPER_DIR/hooks/ into" \ + "every registered account dir, then rewrite each account's settings.json" \ + "hook paths to absolute paths." \ + "" \ + "These hooks (bash-guardrails, protect-claude-config, docker-context," \ + "notify-bell) are docker safety guardrails — identical across every" \ + "account by construction. Run after editing any hook script under" \ + "\$CKIPPER_DIR/hooks/ so the change propagates to every account." \ + "" \ + "Note: this is NOT peer-to-peer sync. To sync user-written hooks" \ + "(scripts you authored that live outside the install set) between" \ + "accounts, use \`ckipper account sync --include hooks\`." +} diff --git a/lib/account/dispatcher_test.bats b/lib/account/dispatcher_test.bats new file mode 100644 index 0000000..73194ab --- /dev/null +++ b/lib/account/dispatcher_test.bats @@ -0,0 +1,99 @@ +#!/usr/bin/env bats +# Module-level tests for lib/account/dispatcher.zsh. +# Verifies routing, help, and fuzzy-suggest behaviour. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: run dispatcher in a zsh subshell with all its dependencies +# (fuzzy.zsh, dispatcher.zsh) sourced and named subcommand handlers stubbed. 
+_run_dispatch() { + run env HOME="$TMP_HOME" PATH="$PATH" \ + zsh -c " + source \"$REPO_ROOT/lib/core/fuzzy.zsh\" + source \"$REPO_ROOT/lib/core/style.zsh\" + source \"$REPO_ROOT/lib/core/help.zsh\" + source \"$REPO_ROOT/lib/account/dispatcher.zsh\" + _ckipper_account_list() { echo 'STUB-LIST'; } + _ckipper_account_add() { echo 'STUB-ADD' \"\$@\"; } + _ckipper_account_dispatch $* + " +} + +@test "dispatch routes 'list' to _ckipper_account_list" { + _run_dispatch list + + [ "$status" -eq 0 ] + [ "$output" = "STUB-LIST" ] +} + +@test "dispatch routes 'add' with arguments to _ckipper_account_add" { + _run_dispatch add personal + + [ "$status" -eq 0 ] + [[ "$output" =~ "STUB-ADD personal" ]] +} + +@test "dispatch short-circuits 'add --help' to per-subcommand help" { + _run_dispatch add --help + + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account add" ]] + [[ "$output" =~ "--adopt" ]] +} + +@test "dispatch short-circuits 'list -h' to per-subcommand help" { + _run_dispatch list -h + + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account list" ]] +} + +@test "dispatch with no args prints overview help" { + _run_dispatch + + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account" ]] + [[ "$output" =~ "Short form" ]] +} + +@test "dispatch with 'help' prints overview help" { + _run_dispatch help + + [ "$status" -eq 0 ] + [[ "$output" =~ "ckipper account" ]] +} + +@test "dispatch fuzzy-suggests on close typo" { + _run_dispatch lst + + [ "$status" -ne 0 ] + [[ "$output" =~ "Unknown command: 'lst'. Did you mean: 'list'?" ]] + [[ "$output" =~ "ckipper account help" ]] +} + +@test "dispatch prints bare unknown-command line on far-off typo" { + _run_dispatch xyzzy + + [ "$status" -ne 0 ] + [[ "$output" =~ "Unknown command: 'xyzzy'." ]] + [[ ! "$output" =~ "Did you mean" ]] +} + +# `sync-hooks` was renamed to `redeploy-hooks`. Old name must error AND +# point the user at the new name. Mirrors the top-level `_ckipper_unknown` +# legacy-command pattern in ckipper.zsh. 
+@test "dispatch points 'sync-hooks' at the new 'redeploy-hooks' name" { + _run_dispatch sync-hooks + + [ "$status" -ne 0 ] + [[ "$output" == *"sync-hooks"* ]] + [[ "$output" == *"redeploy-hooks"* ]] +} diff --git a/lib/account/doctor.zsh b/lib/account/doctor.zsh new file mode 100644 index 0000000..c5773d0 --- /dev/null +++ b/lib/account/doctor.zsh @@ -0,0 +1,559 @@ +#!/usr/bin/env zsh +# Diagnostic check subcommand: doctor (with --fix to apply repairs). +# +# Also owns plugin-metadata path-rewrite logic (formerly lib/account/plugin-repair.zsh): +# in --fix mode, doctor calls _ckipper_account_repair_plugins for any account whose +# plugin metadata has stale ~/.claude/ paths. The repair functions kept their +# original names so their existing tests work unchanged. + +readonly _CKIPPER_DOCTOR_MIN_HOOK_FILES=4 + +# Module-level counters shared across all doctor helpers. +typeset -g _CKIPPER_DOCTOR_FAIL=0 +typeset -g _CKIPPER_DOCTOR_WARN=0 +# Module-level fix-mode flag set by `_ckipper_doctor --fix` and consumed by +# per-account checks (e.g. plugin metadata) to decide warn-only vs. repair. +typeset -g _CKIPPER_DOCTOR_FIX_MODE="false" + +# Print a single check result and increment the appropriate counter. +# +# Uses _core_style_badge so badge color follows the project-wide style policy +# (NO_COLOR / TTY detection / CKIPPER_FORCE_COLOR override) instead of +# emitting raw ANSI codes that ignore the user's preferences. +# +# Args: +# $1 — symbol: PASS, WARN, FAIL, or INFO +# $2 — message text +# +# Returns: +# 0 always. +_ckipper_doctor_check() { + local sym="$1" msg="$2" + local badge + case "$sym" in + PASS) badge=$(_core_style_badge PASS green) ;; + WARN) badge=$(_core_style_badge WARN yellow); (( _CKIPPER_DOCTOR_WARN += 1 )) ;; + FAIL) badge=$(_core_style_badge FAIL red); (( _CKIPPER_DOCTOR_FAIL += 1 )) ;; + INFO) badge="[INFO]" ;; + esac + printf ' %s %s\n' "$badge" "$msg" +} + +# Check that all required ckipper tool files and hook files are deployed. 
+# +# Returns: +# 0 always (results printed via _ckipper_doctor_check). +_ckipper_doctor_tooling() { + _core_style_header "Tooling" + if [[ -d "$CKIPPER_DIR" ]]; then _ckipper_doctor_check PASS "$CKIPPER_DIR exists"; else _ckipper_doctor_check FAIL "$CKIPPER_DIR is missing — run install.sh"; fi + if [[ -f "$CKIPPER_DIR/docker/ckipper.zsh" ]]; then _ckipper_doctor_check PASS "ckipper.zsh deployed"; else _ckipper_doctor_check FAIL "ckipper.zsh missing in $CKIPPER_DIR/docker/"; fi + if [[ -f "$CKIPPER_DIR/docker/cleanup-projects.py" ]]; then _ckipper_doctor_check PASS "cleanup-projects.py deployed"; else _ckipper_doctor_check WARN "cleanup-projects.py missing — ckipper worktree rm cleanup will silently skip"; fi + if [[ -f "$CKIPPER_DIR/settings-template.json" ]]; then _ckipper_doctor_check PASS "settings-template.json deployed"; else _ckipper_doctor_check WARN "settings-template.json missing — ckipper account add will skip seeding settings.json"; fi + if [[ -d "$CKIPPER_DIR/hooks" ]] && (( $(ls -1 "$CKIPPER_DIR/hooks" 2>/dev/null | wc -l) >= _CKIPPER_DOCTOR_MIN_HOOK_FILES )); then + _ckipper_doctor_check PASS "hooks/ has ${_CKIPPER_DOCTOR_MIN_HOOK_FILES}+ files" + else + _ckipper_doctor_check WARN "hooks/ is missing or has fewer than $_CKIPPER_DOCTOR_MIN_HOOK_FILES hook files" + fi + _ckipper_doctor_check_stale_w_vars + _ckipper_doctor_check_config_keys + _ckipper_doctor_check_gum +} + +# Check that gum (charmbracelet/gum) is on PATH. Gum is a hard prereq for the +# setup wizard, sync wizard, and every interactive picker; without it ckipper +# falls back to read-prompt mode but loses keybindings, multi-select, and the +# spinner. Surface a missing install loudly so the user runs `brew install gum`. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). 
+_ckipper_doctor_check_gum() { + if command -v gum >/dev/null 2>&1; then + _ckipper_doctor_check PASS "gum on PATH" + else + _ckipper_doctor_check FAIL "gum not on PATH — install via 'brew install gum'" + fi +} + +# Detect pre-merge W_* variable assignments in ckipper-config.zsh. +# +# Pre-merge installs used W_PROJECTS_DIR / W_PORTS / W_EXTRA_VOLUMES / +# W_EXTRA_ENV / W_WORKTREES_DIR. Post-merge ckipper.zsh only reads the +# CKIPPER_* names, so any leftover W_* assignment is silently ignored — and +# the user's customizations are lost. Surface this loudly. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). +_ckipper_doctor_check_stale_w_vars() { + local cfg="$CKIPPER_DIR/docker/ckipper-config.zsh" + [[ -f "$cfg" ]] || return 0 + if grep -qE '^[[:space:]]*W_(PROJECTS_DIR|WORKTREES_DIR|PORTS|EXTRA_VOLUMES|EXTRA_ENV)[[:space:]]*=' "$cfg"; then + _ckipper_doctor_check FAIL "ckipper-config.zsh has stale W_* assignments — they're being ignored. Rename to CKIPPER_* (e.g. W_PORTS → CKIPPER_PORTS)." + fi +} + +# Validate every CKIPPER_= assignment in ckipper-config.zsh against the +# config schema (lib/core/schema.zsh). Unknown keys produce a WARN — they +# are likely typos that the source loader will silently set into a global +# variable that nothing reads. +# +# Honors a small allowlist for legacy power-user keys that are documented in +# the example file but intentionally absent from the schema (extra_volumes, +# extra_env). They predate the schema and remain as raw zsh arrays. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). 
+_ckipper_doctor_check_config_keys() { + local file="${CKIPPER_DIR:-$HOME/.ckipper}/docker/ckipper-config.zsh" + [[ -f "$file" ]] || return 0 + local -a unknown + local -a power_user_keys=(extra_volumes extra_env) + local line key lower + while IFS= read -r line; do + [[ "$line" =~ ^[[:space:]]*CKIPPER_([A-Z0-9_]+)= ]] || continue + key="${match[1]}" + lower="${key:l}" + (( ${+_CKIPPER_SCHEMA_TYPE[$lower]} )) && continue + (( ${power_user_keys[(I)$lower]} )) && continue + unknown+=("CKIPPER_$key") + done < "$file" + if (( ${#unknown[@]} > 0 )); then + _ckipper_doctor_check WARN "unknown keys in ckipper-config.zsh: ${unknown[*]}" + else + _ckipper_doctor_check PASS "ckipper-config.zsh keys all known" + fi +} + +# Check registry version, permissions, and default account validity. +# +# Returns: +# 0 if registry exists and checks run; 1 if registry file is missing. +_ckipper_doctor_registry() { + echo "" + _core_style_header "Registry" + if [[ ! -f "$CKIPPER_REGISTRY" ]]; then + _ckipper_doctor_check INFO "No registry yet — no accounts registered. 
Run: ckipper account add <name>" + return 1 + fi + local v; v=$(jq -r '.version // 0' "$CKIPPER_REGISTRY" 2>/dev/null) + if [[ "$v" == "$CKIPPER_REGISTRY_VERSION" ]]; then _ckipper_doctor_check PASS "registry version $v matches expected" + else _ckipper_doctor_check FAIL "registry version $v != expected $CKIPPER_REGISTRY_VERSION"; fi + local perms; perms=$(_core_stat_perms "$CKIPPER_REGISTRY") + if [[ "$perms" == "600" ]]; then _ckipper_doctor_check PASS "registry permissions 600" + else _ckipper_doctor_check WARN "registry permissions $perms (expected 600)"; fi + local default_acc; default_acc=$(jq -r '.default // ""' "$CKIPPER_REGISTRY") + if [[ -z "$default_acc" ]]; then + _ckipper_doctor_check WARN "no default account set — ckipper worktree run will require --account" + elif jq -e --arg n "$default_acc" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then + _ckipper_doctor_check INFO "default account: $default_acc" + else + _ckipper_doctor_check FAIL "default account '$default_acc' is NOT in registry — fix with: ckipper account default <name>" + fi + _ckipper_doctor_check_preferences +} + +# Build a jq sub-expression that checks `.value.preferences` has every +# required account-scope schema key. Falls back to a literal jq `true` +# when no account-scope keys exist, so the key check flags no accounts. +# +# Reads: _CKIPPER_SCHEMA_TYPE, _CKIPPER_SCHEMA_SCOPE. +# Returns: 0; emits the jq filter to stdout (e.g. +# `(.value.preferences | has("always_docker")) and ...`). +_ckipper_doctor_required_prefs_filter() { + local key filter="" + for key in "${(@ko)_CKIPPER_SCHEMA_TYPE}"; do + [[ "${_CKIPPER_SCHEMA_SCOPE[$key]}" == "account" ]] || continue + [[ -n "$filter" ]] && filter+=" and " + filter+="(.value.preferences | has(\"$key\"))" + done + echo "${filter:-true}" +} + +# Verify every registered account has the v2 `preferences` block with all +# account-scoped schema keys present.
Migration runs at registry-load, but a +# user who hand-edits accounts.json between bumps can end up with a partial +# block — surface it. +# +# Emits WARN (not FAIL) listing the offending accounts. Migration will fix +# them on next registry-touch operation; this is a heads-up, not a halt. +# +# The list of required keys is derived from the schema at call time so +# adding a new account-scope key in lib/core/schema.zsh updates this +# check automatically. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). +_ckipper_doctor_check_preferences() { + [[ -f "$CKIPPER_REGISTRY" ]] || return 0 + local required_filter + required_filter=$(_ckipper_doctor_required_prefs_filter) + local missing + missing=$(jq -r ' + .accounts | to_entries[] | + select( + .value.preferences == null or + (.value.preferences | type) != "object" or + (('"$required_filter"') | not) + ) | .key + ' "$CKIPPER_REGISTRY" 2>/dev/null) + if [[ -n "$missing" ]]; then + local list; list=$(echo "$missing" | paste -sd "," -) + _ckipper_doctor_check WARN "accounts missing preferences block: $list" + else + _ckipper_doctor_check PASS "accounts.json v2 preferences blocks valid" + fi +} + +# Detect whether an account's plugin metadata files contain stale ~/.claude/ paths. +# +# Args: +# $1 — account config directory +# +# Returns: +# 0 if stale paths found; 1 otherwise. +_ckipper_doctor_has_stale_plugin_metadata() { + local dir="$1" pm + for pm in known_marketplaces.json installed_plugins.json; do + [[ -f "$dir/plugins/$pm" ]] || continue + grep -q -- "$HOME/.claude/" "$dir/plugins/$pm" 2>/dev/null && return 0 + done + return 1 +} + +# Apply repair and report PASS/FAIL based on the post-repair state. +# +# Args: +# $1 — account name +# $2 — account config directory +# +# Returns: +# 0 always (results printed via _ckipper_doctor_check). 
+_ckipper_doctor_apply_plugin_repair() { + local name="$1" dir="$2" + _ckipper_account_repair_plugins "$name" >/dev/null 2>&1 + if _ckipper_doctor_has_stale_plugin_metadata "$dir"; then + _ckipper_doctor_check FAIL " plugins/*.json still has stale ~/.claude/ paths after repair attempt" + return 0 + fi + _ckipper_doctor_check PASS " plugin metadata repaired (stale ~/.claude/ paths rewritten)" +} + +# Check plugin metadata files for a single account for stale paths. +# +# In fix-mode (when _CKIPPER_DOCTOR_FIX_MODE is "true"), runs the in-place +# rewrite via _ckipper_account_repair_plugins and re-checks; emits PASS on +# successful repair. Otherwise emits WARN with a hint to run `doctor --fix`. +# +# Args: +# $1 — account name +# $2 — account config directory +# +# Returns: +# 0 always. +_ckipper_doctor_account_plugins() { + local name="$1" dir="$2" + _ckipper_doctor_has_stale_plugin_metadata "$dir" || return 0 + if [[ "$_CKIPPER_DOCTOR_FIX_MODE" = "true" ]]; then + _ckipper_doctor_apply_plugin_repair "$name" "$dir" + return 0 + fi + _ckipper_doctor_check WARN " plugins/*.json has stale ~/.claude/ paths — plugins will fail to load. Repair: ckipper doctor --fix" +} + +# Rewrite stale absolute paths embedded in Claude Code's plugin metadata files +# (known_marketplaces.json, installed_plugins.json). After moving an account +# directory (legacy ~/.claude → ~/.claude-<name>), the plugin cache files on +# disk have moved with the rename, but the JSON metadata still has the old +# absolute paths baked in — Claude Code then fails to resolve plugins with +# "Plugin not found in marketplace ..." errors. +# +# Args: +# $1 — old prefix (must end with `/`), e.g. "$HOME/.claude/" +# $2 — new prefix (must end with `/`), e.g. "$HOME/.claude-personal/" +# +# Returns: +# 0 always (idempotent: if neither file contains old prefix, this is a no-op); +# 1 if arguments are invalid.
+_ckipper_account_rewrite_plugin_paths() { + local old="$1" new="$2" + [[ -z "$old" || -z "$new" || "$old" != */ || "$new" != */ ]] && return 1 + [[ "$old" == "$new" ]] && return 0 + local f + for f in plugins/known_marketplaces.json plugins/installed_plugins.json; do + _ckipper_account_rewrite_single_plugin_file "$old" "$new" "$f" + done + return 0 +} + +# Rewrite stale paths in a single plugin metadata file using sed (in-place). +# Creates a timestamped backup before modifying the file. +# +# Args: +# $1 — old prefix (must end with `/`) +# $2 — new prefix (must end with `/`) +# $3 — relative plugin file path (e.g. plugins/known_marketplaces.json) +# +# Returns: +# 0 always (no-op if file absent or old prefix not found). +_ckipper_account_rewrite_single_plugin_file() { + local old="$1" new="$2" rel_path="$3" + local fp="$new$rel_path" + [[ -f "$fp" ]] || return 0 + grep -q -- "$old" "$fp" 2>/dev/null || return 0 + cp "$fp" "$fp.pre-rewrite-backup-$(date +%s)" + if [[ "${_CKIPPER_TEST_OSTYPE:-$OSTYPE}" == darwin* ]]; then + sed -i '' "s|$old|$new|g" "$fp" + else + sed -i "s|$old|$new|g" "$fp" + fi +} + +# Detect the stale prefix in the plugin metadata files for the given account. +# +# Args: +# $1 — account config directory path +# +# Returns: +# 0 always; prints stale prefix to stdout (empty if none found). +_ckipper_account_detect_stale_plugin_prefix() { + local dir="$1" + local f + for f in plugins/known_marketplaces.json plugins/installed_plugins.json; do + [[ -f "$dir/$f" ]] || continue + local hit + hit=$(grep -oE "$HOME/\.claude(-[a-z0-9_-]+)?/" "$dir/$f" 2>/dev/null \ + | sort -u | grep -v "^$dir/$" | head -1) + if [[ -n "$hit" ]]; then + printf '%s' "$hit" + return 0 + fi + done +} + +# Rewrite stale absolute paths in plugin metadata for a single registered account. +# +# This is internal to doctor's --fix path; the public `ckipper account +# repair-plugins` subcommand was retired in favour of `ckipper doctor --fix`. 
+# +# Args: +# $1 — registered account name +# +# Returns: +# 0 on success or when no repair is needed; 1 on error. +# +# Errors (stderr): +# "Usage: _ckipper_account_repair_plugins <name> ..." — when name is empty. +# "Account '...' is not registered." — when account not found. +# "Account dir does not exist: ..." — when directory is missing. +_ckipper_account_repair_plugins() { + local name="$1" + if [[ -z "$name" ]]; then + echo "Usage: _ckipper_account_repair_plugins <name> (called from ckipper doctor --fix)" >&2 + return 1 + fi + _core_registry_check_version || return 1 + local dir + dir=$(jq -r --arg n "$name" '.accounts[$n].config_dir // empty' "$CKIPPER_REGISTRY") + if [[ -z "$dir" ]]; then + echo "Account '$name' is not registered. Run: ckipper account list" >&2 + return 1 + fi + if [[ ! -d "$dir" ]]; then + echo "Account dir does not exist: $dir" >&2 + return 1 + fi + _ckipper_account_repair_plugins_apply "$name" "$dir" +} + +# Apply stale-prefix repair to an account directory once validation has passed. +# +# Args: +# $1 — account name (for messages) +# $2 — account config directory +# +# Returns: +# 0 on success or when no repair is needed. +_ckipper_account_repair_plugins_apply() { + local name="$1" dir="$2" + local stale_prefix + stale_prefix=$(_ckipper_account_detect_stale_plugin_prefix "$dir") + if [[ -z "$stale_prefix" ]]; then + echo "No stale paths found in $dir/plugins/. Nothing to repair." + return 0 + fi + echo "Rewriting plugin metadata for '$name':" + echo " $stale_prefix → $dir/" + _ckipper_account_rewrite_plugin_paths "$stale_prefix" "$dir/" + echo "Done. Backups saved alongside each rewritten file (.pre-rewrite-backup-<timestamp>)." +} + +# Check macOS Keychain entry presence for a single account. +# +# Args: +# $1 — keychain service string (may be empty) +# $2 — account name +# +# Returns: +# 0 always (skips on non-darwin).
+_ckipper_doctor_account_keychain() { + local svc="$1" name="$2" + [[ "${_CKIPPER_TEST_OSTYPE:-$OSTYPE}" != darwin* ]] && return 0 + if [[ -z "$svc" ]]; then + _ckipper_doctor_check INFO " keychain_service: null (account uses on-disk credentials)" + elif ! _core_keychain_validate "$svc"; then + _ckipper_doctor_check FAIL " keychain_service has invalid shape: $svc" + elif security find-generic-password -s "$svc" >/dev/null 2>&1; then + _ckipper_doctor_check PASS " keychain entry present: $svc" + else + _ckipper_doctor_check WARN " keychain entry NOT FOUND: $svc — re-run /login with: claude-$name" + fi +} + +# Check config directory, .claude.json, settings.json, hooks, and plugins for one account. +# +# Args: +# $1 — account name +# +# Returns: +# 0 always. +_ckipper_doctor_account() { + local name="$1" + echo "" + echo " Account: $name" + local dir; dir=$(jq -r --arg n "$name" '.accounts[$n].config_dir' "$CKIPPER_REGISTRY") + local svc; svc=$(jq -r --arg n "$name" '.accounts[$n].keychain_service // ""' "$CKIPPER_REGISTRY") + if [[ -d "$dir" ]]; then _ckipper_doctor_check PASS " dir exists: $dir" + else _ckipper_doctor_check FAIL " dir missing: $dir"; fi + if [[ -f "$dir/.claude.json" ]]; then + local email; email=$(jq -r '.oauthAccount.emailAddress // "(none)"' "$dir/.claude.json" 2>/dev/null) + local proj_count; proj_count=$(jq '.projects | length // 0' "$dir/.claude.json" 2>/dev/null) + local mcp_count; mcp_count=$(jq '.mcpServers | length // 0' "$dir/.claude.json" 2>/dev/null) + _ckipper_doctor_check PASS " .claude.json: oauth=$email, projects=$proj_count, mcps=$mcp_count" + else + _ckipper_doctor_check WARN " .claude.json missing in $dir" + fi + if [[ -f "$dir/settings.json" ]]; then _ckipper_doctor_check PASS " settings.json present"; else _ckipper_doctor_check WARN " settings.json missing"; fi + if [[ -d "$dir/hooks" ]]; then _ckipper_doctor_check PASS " hooks/ deployed"; else _ckipper_doctor_check WARN " hooks/ missing — run: ckipper account redeploy-hooks"; 
fi + _ckipper_doctor_account_plugins "$name" "$dir" + _ckipper_doctor_account_keychain "$svc" "$name" +} + +# Iterate registry accounts and run per-account checks. +# +# Returns: +# 0 always. +_ckipper_doctor_accounts() { + echo "" + _core_style_header "Per-account state" + local names; names=$(jq -r '.accounts | keys[]?' "$CKIPPER_REGISTRY") + if [[ -z "$names" ]]; then + _ckipper_doctor_check WARN "registry has no accounts" + return 0 + fi + local name + while IFS= read -r name; do + _ckipper_doctor_account "$name" + done <<< "$names" +} + +# Verify the generated aliases.zsh parses cleanly with `zsh -n`. A broken +# aliases file can land if disk fills mid-write or jq emits unexpected +# characters — the calling shell's `source` then prints errors and may leave +# the launcher functions undefined. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). +_ckipper_doctor_check_aliases_parse() { + local f="$CKIPPER_DIR/aliases.zsh" + [[ -f "$f" ]] || return 0 + if zsh -n "$f" 2>/dev/null; then + _ckipper_doctor_check PASS "aliases.zsh parses cleanly" + else + _ckipper_doctor_check FAIL "aliases.zsh has parse errors — regenerate via 'ckipper account add/remove' or re-run install.sh" + fi +} + +# Check that the cached completion file matches the current +# CKIPPER_COMPLETION_VERSION. Stale files don't break ckipper but produce +# outdated tab-completions until the user starts a new zsh shell that re-runs +# the heredoc-regen block at the bottom of ckipper.zsh. +# +# Returns: 0 always (results printed via _ckipper_doctor_check). +_ckipper_doctor_check_completion() { + local cf="$HOME/.zsh/completions/_ckipper" + if [[ ! 
-f "$cf" ]]; then + _ckipper_doctor_check WARN "completion file ~/.zsh/completions/_ckipper missing — start a new zsh shell to regenerate" + return 0 + fi + if grep -q "# ckipper-completion-version=$CKIPPER_COMPLETION_VERSION" "$cf" 2>/dev/null; then + _ckipper_doctor_check PASS "completion file matches version $CKIPPER_COMPLETION_VERSION" + else + _ckipper_doctor_check WARN "completion file is stale (expected version $CKIPPER_COMPLETION_VERSION) — start a new zsh shell to regenerate" + fi +} + +# Check aliases.zsh and .zshrc integration lines, plus stub dir/file presence. +# +# Returns: +# 0 always. +_ckipper_doctor_shell() { + echo "" + _core_style_header "Aliases & shell integration" + if [[ -f "$CKIPPER_DIR/aliases.zsh" ]]; then _ckipper_doctor_check PASS "aliases.zsh exists at $CKIPPER_DIR/aliases.zsh" + else _ckipper_doctor_check WARN "aliases.zsh missing — will be regenerated on next add/remove"; fi + _ckipper_doctor_check_aliases_parse + if grep -q 'ckipper/aliases.zsh' "$HOME/.zshrc" 2>/dev/null; then _ckipper_doctor_check PASS "~/.zshrc sources aliases.zsh" + else _ckipper_doctor_check WARN "~/.zshrc does NOT source aliases.zsh — add: [[ -f ~/.ckipper/aliases.zsh ]] && source ~/.ckipper/aliases.zsh"; fi + if grep -q 'ckipper/docker/ckipper\.zsh' "$HOME/.zshrc" 2>/dev/null; then _ckipper_doctor_check PASS "~/.zshrc sources ckipper.zsh" + else _ckipper_doctor_check FAIL "~/.zshrc does NOT source ckipper.zsh — re-run install.sh"; fi + _ckipper_doctor_check_completion + echo "" + _core_style_header "Stub files (cosmetic)" + if [[ -d "$HOME/.claude" ]]; then + local stub_count; stub_count=$(ls -1A "$HOME/.claude" 2>/dev/null | wc -l | tr -d ' ') + _ckipper_doctor_check WARN "~/.claude exists ($stub_count files) — Claude Code may have recreated it. 
Safe to: rm -rf ~/.claude" + else + _ckipper_doctor_check PASS "~/.claude (stub dir) is absent" + fi + if [[ -f "$HOME/.claude.json" ]]; then _ckipper_doctor_check WARN "~/.claude.json exists at home root — leftover from a pre-ckipper claude install." + else _ckipper_doctor_check PASS "~/.claude.json (home root) is absent"; fi +} + +# Print the three-state summary line (FAIL / WARN-only / all-passed). +# +# Returns: +# 0 if no FAILs; 1 if any FAILs. +_ckipper_doctor_summary() { + echo "" + _core_style_divider + if (( _CKIPPER_DOCTOR_FAIL > 0 )); then + local fail_part warn_part + fail_part=$(_core_style_color red "$_CKIPPER_DOCTOR_FAIL FAIL") + warn_part=$(_core_style_color yellow "$_CKIPPER_DOCTOR_WARN WARN") + printf 'Result: %s, %s\n' "$fail_part" "$warn_part" + return 1 + fi + if (( _CKIPPER_DOCTOR_WARN > 0 )); then + printf 'Result: %s\n' "$(_core_style_color yellow "$_CKIPPER_DOCTOR_WARN WARN")" + return 0 + fi + printf 'Result: %s\n' "$(_core_style_color green "all checks passed")" +} + +# Run all diagnostic checks and print results to stdout. +# +# Args: +# $1 — optional `--fix` flag; when set, doctor applies in-place repairs for +# check categories that support it (currently: stale plugin metadata +# paths). Without --fix, the same checks emit WARN with a hint. +# +# Returns: +# 0 if all checks pass (warnings are non-fatal); 1 if any FAIL checks are found. +_ckipper_doctor() { + local should_fix="false" + [[ "$1" == "--fix" ]] && { should_fix="true"; shift; } + _CKIPPER_DOCTOR_FAIL=0 + _CKIPPER_DOCTOR_WARN=0 + _CKIPPER_DOCTOR_FIX_MODE="$should_fix" + _ckipper_doctor_tooling + if ! _ckipper_doctor_registry; then + return 0 + fi + _ckipper_doctor_accounts + _ckipper_doctor_shell + _ckipper_doctor_summary +} diff --git a/lib/account/doctor_test.bats b/lib/account/doctor_test.bats new file mode 100644 index 0000000..494b2bc --- /dev/null +++ b/lib/account/doctor_test.bats @@ -0,0 +1,326 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/doctor.zsh helpers. 
+# Sources ckipper.zsh (which wires up all lib/core/ + lib/account/ modules). +# +# Doctor.zsh also owns plugin-metadata path-rewrite logic (formerly in +# lib/account/plugin-repair.zsh). The plugin-repair tests below were merged +# into this file when `ckipper account repair-plugins` was retired in favour +# of `ckipper doctor --fix`. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: run a zsh expression with the full ckipper environment sourced. +run_helper() { + run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="linux" \ + CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; $*" +} + +# ── _ckipper_doctor_check ───────────────────────────────────────────── + +@test "doctor_check PASS prints PASS label without incrementing counters" { + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check PASS "all good" + echo "fail=$_CKIPPER_DOCTOR_FAIL warn=$_CKIPPER_DOCTOR_WARN"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "PASS" ]] + [[ "$output" =~ "fail=0" ]] + [[ "$output" =~ "warn=0" ]] +} + +@test "doctor_check WARN prints WARN label and increments warn counter" { + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check WARN "something fishy" + echo "fail=$_CKIPPER_DOCTOR_FAIL warn=$_CKIPPER_DOCTOR_WARN"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "WARN" ]] + [[ "$output" =~ "warn=1" ]] +} + +@test "doctor_check FAIL prints FAIL label and increments fail counter" { + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check FAIL "broken" + echo "fail=$_CKIPPER_DOCTOR_FAIL warn=$_CKIPPER_DOCTOR_WARN"' + + [ "$status" -eq 0 ] + [[ "$output" =~ "FAIL" ]] + [[ "$output" =~ "fail=1" ]] +} + +# ── _ckipper_doctor_summary ─────────────────────────────────────────── + +@test "doctor_summary 
returns 0 when no FAILs or WARNs" { + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_summary' + + [ "$status" -eq 0 ] + [[ "$output" =~ "all checks passed" ]] +} + +@test "doctor_summary returns 0 when only WARNs (no FAILs)" { + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=2 + _ckipper_doctor_summary' + + [ "$status" -eq 0 ] + [[ "$output" =~ "WARN" ]] +} + +@test "doctor_summary returns 1 when there are FAILs" { + run_helper '_CKIPPER_DOCTOR_FAIL=1; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_summary' + + [ "$status" -ne 0 ] + [[ "$output" =~ "FAIL" ]] +} + +# ── _ckipper_doctor_accounts ────────────────────────────────────────── + +@test "doctor_accounts emits a WARN when the registry has no accounts" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_accounts' + + [ "$status" -eq 0 ] + [[ "$output" =~ "WARN" ]] + [[ "$output" =~ "no accounts" ]] +} + +@test "doctor_accounts lists each registered account by name" { + local acc_dir="$TMP_HOME/.claude-work" + mkdir -p "$acc_dir" + echo '{"version":1,"default":"work","accounts":{"work":{"config_dir":"'"$acc_dir"'","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_accounts' + + [ "$status" -eq 0 ] + [[ "$output" =~ "work" ]] +} + +# ── _ckipper_doctor_tooling ─────────────────────────────────────────── + +@test "doctor_tooling emits PASS when ckipper dir exists" { + # CKIPPER_DIR is already created by setup_isolated_env. 
+ run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_tooling' + + [ "$status" -eq 0 ] + [[ "$output" =~ "PASS" ]] + [[ "$output" =~ "exists" ]] +} + +# ── plugin-metadata path rewrite (merged from plugin-repair_test.bats) ─ + +# ── _ckipper_account_detect_stale_plugin_prefix ────────────────────────────── + +@test "detect_stale_plugin_prefix finds old prefix in known_marketplaces.json" { + local acc_dir="$TMP_HOME/.claude-personal" + mkdir -p "$acc_dir/plugins" + # Write a marketplace file with a stale ~/.claude/ prefix. + printf '{"url":"%s/plugins/marketplace.json"}' "$TMP_HOME/.claude/" \ + > "$acc_dir/plugins/known_marketplaces.json" + + run_helper "_ckipper_account_detect_stale_plugin_prefix \"$acc_dir\"" + + [ "$status" -eq 0 ] + [[ "$output" =~ ".claude/" ]] +} + +# ── _ckipper_account_rewrite_plugin_paths ───────────────────────────────────── + +@test "rewrite_plugin_paths replaces old prefix with new prefix in plugin files" { + local old_dir="$TMP_HOME/.claude/" + local new_dir="$TMP_HOME/.claude-personal/" + mkdir -p "${new_dir}plugins" + # Seed a marketplace file using the old prefix path. + printf '{"url":"%s/plugins/marketplace.json"}' "$old_dir" \ + > "${new_dir}plugins/known_marketplaces.json" + + # Use darwin ostype so the code picks `sed -i ''` (macOS compatible form). + run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="darwin" \ + CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; _ckipper_account_rewrite_plugin_paths \"$old_dir\" \"$new_dir\"" + + [ "$status" -eq 0 ] + grep -q "$new_dir" "${new_dir}plugins/known_marketplaces.json" + ! 
grep -q "$old_dir" "${new_dir}plugins/known_marketplaces.json" +} + +@test "rewrite_plugin_paths is idempotent — running twice produces the same result" { + local old_dir="$TMP_HOME/.claude/" + local new_dir="$TMP_HOME/.claude-personal/" + mkdir -p "${new_dir}plugins" + printf '{"url":"%s/plugins/marketplace.json"}' "$old_dir" \ + > "${new_dir}plugins/known_marketplaces.json" + + # First run — replaces old prefix with new prefix. + run env \ + HOME="$TMP_HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" _CKIPPER_TEST_OSTYPE="darwin" CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; _ckipper_account_rewrite_plugin_paths \"$old_dir\" \"$new_dir\"" + local content_after_first; content_after_first=$(cat "${new_dir}plugins/known_marketplaces.json") + + # Second run — old prefix is gone so this is a no-op; output must be identical. + run env \ + HOME="$TMP_HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" _CKIPPER_TEST_OSTYPE="darwin" CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; _ckipper_account_rewrite_plugin_paths \"$old_dir\" \"$new_dir\"" + local content_after_second; content_after_second=$(cat "${new_dir}plugins/known_marketplaces.json") + + [ "$content_after_first" = "$content_after_second" ] +} + +# ── doctor --fix integration ────────────────────────────────────────── + +# Helper: seed registry + account dir + stale plugin metadata for --fix tests. +# Uses darwin ostype so the rewrite picks the macOS-compatible `sed -i ''` form, +# matching the host running these tests. +_seed_account_with_stale_plugins() { + local name="$1" + local acc_dir="$TMP_HOME/.claude-$name" + mkdir -p "$acc_dir/plugins" + # The fixture must use a prefix that differs from the new dir AND ends with `/`. + # Using $TMP_HOME/.claude/ guarantees both conditions in the isolated env. 
+ printf '{"url":"%s/plugins/marketplace.json"}' "$TMP_HOME/.claude/" \ + > "$acc_dir/plugins/known_marketplaces.json" + printf '{"version":2,"default":"%s","accounts":{"%s":{"config_dir":"%s","keychain_service":null}}}' \ + "$name" "$name" "$acc_dir" > "$CKIPPER_REGISTRY" +} + +@test "doctor (no --fix) just WARNs on stale plugin paths and leaves the file unchanged" { + _seed_account_with_stale_plugins personal + local pm_file="$TMP_HOME/.claude-personal/plugins/known_marketplaces.json" + local before; before=$(cat "$pm_file") + + run_helper '_ckipper_doctor' + + [[ "$output" =~ "WARN" ]] + [[ "$output" =~ "stale" ]] + # File was NOT rewritten. + local after; after=$(cat "$pm_file") + [ "$before" = "$after" ] +} + +@test "doctor --fix repairs stale plugin paths and re-emits PASS for plugin metadata" { + _seed_account_with_stale_plugins personal + local pm_file="$TMP_HOME/.claude-personal/plugins/known_marketplaces.json" + + run env \ + HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + _CKIPPER_TEST_OSTYPE="darwin" \ + CKIPPER_FORCE=1 \ + zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; _ckipper_doctor --fix" + + [[ "$output" =~ "PASS" ]] + [[ "$output" =~ "plugin metadata repaired" ]] + # File now references the new prefix and the seeded stale prefix + # ($TMP_HOME/.claude/) is gone. The rewrite produces a `.claude-/` + # substring (with a trailing slash from the new prefix), so we assert on + # the substring `.claude-personal` rather than a specific concatenation. + grep -q ".claude-personal" "$pm_file" + ! 
grep -q "$TMP_HOME/.claude/" "$pm_file" +} + +# ── _ckipper_doctor_check_preferences ───────────────────────────────── + +@test "doctor flags account missing preferences block" { + local acc_dir="$TMP_HOME/.claude-work" + mkdir -p "$acc_dir" + cat >"$CKIPPER_REGISTRY" <<EOF +{"version":2,"default":"work","accounts":{"work":{"config_dir":"$acc_dir","keychain_service":null}}} +EOF + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check_preferences' + + [ "$status" -eq 0 ] + [[ "$output" =~ "WARN" ]] + [[ "$output" =~ "preferences" ]] +} + +@test "doctor passes when the account has a preferences block" { + local acc_dir="$TMP_HOME/.claude-work" + mkdir -p "$acc_dir" + cat >"$CKIPPER_REGISTRY" <<EOF +{"version":2,"default":"work","accounts":{"work":{"config_dir":"$acc_dir","keychain_service":null,"preferences":{}}}} +EOF + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check_preferences' + + [ "$status" -eq 0 ] + [[ "$output" =~ "PASS" ]] +} + +# ── _ckipper_doctor_check_config_keys ───────────────────────────────── + +@test "doctor warns about unknown CKIPPER_ keys in the config file" { + mkdir -p "$CKIPPER_DIR/docker" + cat >"$CKIPPER_DIR/docker/ckipper-config.zsh" <<'CFG' +CKIPPER_NOTIFY_BELL=true +CKIPPER_TYPO_KEY=foo +CFG + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check_config_keys' + + [ "$status" -eq 0 ] + [[ "$output" =~ "WARN" ]] + [[ "$output" =~ "CKIPPER_TYPO_KEY" ]] +} + +@test "doctor passes when every CKIPPER_ key in config file is in the schema" { + mkdir -p "$CKIPPER_DIR/docker" + cat >"$CKIPPER_DIR/docker/ckipper-config.zsh" <<'CFG' +CKIPPER_NOTIFY_BELL=true +CKIPPER_DEP_INSTALL_CMD="pnpm install" +CFG + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check_config_keys' + + [ "$status" -eq 0 ] + [[ "$output" =~ "PASS" ]] + [[ "$output" =~ "all known" ]] +} + +@test "doctor accepts EXTRA_VOLUMES and EXTRA_ENV as power-user keys" { + mkdir -p "$CKIPPER_DIR/docker" + cat >"$CKIPPER_DIR/docker/ckipper-config.zsh" <<'CFG' +CKIPPER_EXTRA_VOLUMES=("foo:/foo") +CKIPPER_EXTRA_ENV=("BAR=baz") +CFG + + run_helper '_CKIPPER_DOCTOR_FAIL=0; _CKIPPER_DOCTOR_WARN=0 + _ckipper_doctor_check_config_keys' + + [ "$status" -eq 0 ] + [[ ! "$output" =~ "unknown keys" ]] + [[ ! "$output" =~ "WARN" ]] +} diff --git a/lib/account/sync/_shared.zsh b/lib/account/sync/_shared.zsh new file mode 100644 index 0000000..7ab169d --- /dev/null +++ b/lib/account/sync/_shared.zsh @@ -0,0 +1,76 @@ +#!/usr/bin/env zsh +# Strategy-shared helpers — content hashing, diff stats, JSON status compare. +# +# Sourced from ckipper.zsh BEFORE any strategy module so every strategy can +# call these without a sibling import. Keep this file dependency-free +# (no calls into other sync modules) so the load order stays trivial.
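The content-hashing approach this header mentions can be sketched standalone. A minimal bash sketch, not the ckipper helpers themselves: `hash_dir_sketch` is a hypothetical name, and it uses `sha256sum` (coreutils) where the ckipper code uses `shasum`; the technique is the same — hash every file in lexical order, then hash the concatenated digests once more so the result is stable for identical directory contents.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the directory content-hash technique:
# hash each file (with its relative path) in lexical order, then
# hash the combined digest list once more.
hash_dir_sketch() {
  local dir="$1"
  (cd "$dir" && find . -type f -print0 2>/dev/null \
    | sort -z \
    | xargs -0 sha256sum 2>/dev/null) \
    | sha256sum | cut -d' ' -f1
}

demo_dir=$(mktemp -d)
echo "hello" > "$demo_dir/a.txt"

# Two runs over unchanged contents must agree.
h1=$(hash_dir_sketch "$demo_dir")
h2=$(hash_dir_sketch "$demo_dir")
[ "$h1" = "$h2" ] && echo "stable"   # → prints "stable"
```

Sorting by NUL-separated path before hashing is the load-bearing detail: `find` emits files in filesystem order, which differs across machines, so an unsorted pipeline would produce different digests for identical content.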
+ +# Per-target context shared across the engine, dispatcher, and preview +# modules. Single canonical declaration here — those three modules must NOT +# redeclare it (any `=()` in a redeclaration would silently reset state set +# by an earlier module). Module test files source this file before the +# module under test for the same reason. +# +# Keys: src_dir, dst_dir, src_name, dst_name, backup_dir, changeset, +# summaries, items. +typeset -gA _CKIPPER_SYNC_CTX + +# Compute sha256 of a file. Uses shasum (macOS- and Linux-friendly). +# +# Args: $1 — file path. +# Returns: 0; prints hex hash, or empty string if path is missing. +_ckipper_account_sync_hash_file() { + local f="$1" + [[ ! -f "$f" ]] && { echo ""; return 0; } + shasum -a 256 "$f" | cut -d' ' -f1 +} + +# Compute a content hash for a directory or symlink. For symlinks: the +# target path. For regular dirs: concatenated sha256 of every file in +# lexical order, hashed once more. +# +# Note: arg variable is `target` (not `path`) — zsh ties lowercase `path` to +# `$PATH` as an array, which corrupts the env if used as a local var. +# +# Args: $1 — path to directory or symlink. +# Returns: 0; prints hex hash, or empty string if path is missing. +_ckipper_account_sync_hash_dir() { + local target="$1" + [[ ! -e "$target" ]] && { echo ""; return 0; } + if [[ -L "$target" ]]; then + readlink "$target" | shasum -a 256 | cut -d' ' -f1 + return 0 + fi + [[ ! -d "$target" ]] && { echo ""; return 0; } + (cd "$target" && find . -type f -print0 2>/dev/null \ + | sort -z \ + | xargs -0 shasum -a 256 2>/dev/null) \ + | shasum -a 256 | cut -d' ' -f1 +} + +# Run a unified `diff` on two files and emit a "+A/-D" line-stat string. +# Used by file-shaped strategies (files-flat, hooks) for the preview summary. +# +# Args: $1 — destination file (left side); $2 — source file (right side). +# Returns: 0 always (diff exit 1 means "files differ", expected); prints "+A/-D". 
+_ckipper_account_sync_diff_line_stats() { + diff "$1" "$2" 2>/dev/null \ + | awk 'BEGIN{a=0;d=0} /^>/{a++} /^</{d++} END{printf "+%d/-%d\n", a, d}' +} diff --git a/lib/account/sync/backup.zsh b/lib/account/sync/backup.zsh new file mode 100644 --- /dev/null +++ b/lib/account/sync/backup.zsh +#!/usr/bin/env zsh +# Backup, manifest, and rollback helpers for `ckipper account sync`. +# +# Backup layout: +#   <dst_dir>/.ckipper-sync-backups/<ts>-from-<source>/ +#     <backed-up files at their relative paths> +#     .ckipper-sync-manifest.json +# +# 0700 perms on every backup dir; 0600 on every backed-up file (mirrors +# the registry permission discipline in lib/core/registry.zsh). + +readonly _CKIPPER_SYNC_BACKUP_SUBDIR=".ckipper-sync-backups" +readonly _CKIPPER_SYNC_BACKUP_DIR_PERMS=700 +readonly _CKIPPER_SYNC_BACKUP_FILE_PERMS=600 +readonly _CKIPPER_SYNC_MANIFEST_FILE=".ckipper-sync-manifest.json" +readonly _CKIPPER_SYNC_MANIFEST_VERSION=1 + +# Compute the LIVE absolute path for a manifest entry. Most types live +# under <dst_dir>/<rel>; prefs operates on $CKIPPER_REGISTRY (which is +# usually outside the destination account dir). Used by both apply_one +# (op-derivation in engine.zsh) and rollback_one so the two agree on +# what file is being touched. +# +# Args: $1 — type (may be empty for legacy callers); $2 — dst_dir; +# $3 — relpath from the manifest. +# Returns: 0; prints absolute path. +_ckipper_account_sync_live_path() { + local type="$1" dst_dir="$2" rel="$3" + case "$type" in + prefs) echo "$CKIPPER_REGISTRY" ;; + *) echo "$dst_dir/$rel" ;; + esac +} + +# Compute the backup-dir path (does NOT create it). Pure function; no IO. +# +# Args: $1 — destination account dir; $2 — source account name. +# Returns: 0; prints absolute path of the to-be-created backup dir. +_ckipper_account_sync_backup_dir_path() { + local dst_dir="$1" source_name="$2" + local ts; ts=$(date -u +"%Y-%m-%dT%H-%M-%SZ") + echo "$dst_dir/$_CKIPPER_SYNC_BACKUP_SUBDIR/$ts-from-$source_name" +} + +# Create the backup root dir for one sync invocation. Idempotent: a +# concurrent caller picking the same timestamp will see the existing dir. +# +# Args: $1 — destination account dir; $2 — source account name. +# Returns: 0; prints the created path on stdout.
+_ckipper_account_sync_backup_create() { + local dst_dir="$1" source_name="$2" + local backup_dir + backup_dir=$(_ckipper_account_sync_backup_dir_path "$dst_dir" "$source_name") + mkdir -p "$backup_dir" + chmod "$_CKIPPER_SYNC_BACKUP_DIR_PERMS" "$backup_dir" + echo "$backup_dir" +} + +# Copy a single file or directory from the destination into the backup +# dir at the given relative path. No-op when the source path does not +# exist (i.e. operation is "create" — there's nothing to back up). +# Idempotent: a second call for the same rel within one invocation is a +# no-op so the original-state snapshot is preserved when two strategies +# (e.g. settings + statusline) write to the same destination file. +# +# Args: $1 — backup_dir; $2 — absolute source path; $3 — relative destination path. +# Returns: 0 on success or no-op; 1 if cp fails. +_ckipper_account_sync_backup_file() { + local backup_dir="$1" src="$2" rel="$3" + [[ ! -e "$src" ]] && return 0 + local dst="$backup_dir/$rel" + [[ -e "$dst" ]] && return 0 + mkdir -p "${dst:h}" + cp -a "$src" "$dst" || return 1 + [[ -f "$dst" ]] && chmod "$_CKIPPER_SYNC_BACKUP_FILE_PERMS" "$dst" + return 0 +} + +# Initialize the per-invocation manifest with an empty `files` array. +# +# Args: $1 — backup_dir; $2 — source name; $3 — target name. +# Returns: 0; writes manifest JSON to <backup_dir>/.ckipper-sync-manifest.json. +_ckipper_account_sync_manifest_init() { + local backup_dir="$1" source_name="$2" target_name="$3" + local manifest="$backup_dir/$_CKIPPER_SYNC_MANIFEST_FILE" + local ts; ts=$(date -u +"%Y-%m-%dT%H:%M:%SZ") + jq -n --argjson v "$_CKIPPER_SYNC_MANIFEST_VERSION" \ + --arg s "$source_name" --arg t "$target_name" --arg ts "$ts" \ + '{version: $v, source: $s, target: $t, timestamp: $ts, files: []}' \ + > "$manifest" + chmod "$_CKIPPER_SYNC_BACKUP_FILE_PERMS" "$manifest" +} + +# Append one entry to the manifest. Items field is a comma-separated list +# (the strategy decides what counts as an item — JSON keys, file paths, etc.).
+# +# Args: $1 — backup_dir; $2 — relative path; $3 — operation (create|overwrite); +# $4 — type id; $5 — items (comma-separated, optional). +# Returns: 0 on success; 1 on jq failure (tmp file is cleaned up on every path). +_ckipper_account_sync_manifest_append() { + local backup_dir="$1" rel="$2" op="$3" type="$4" items="${5:-}" + local manifest="$backup_dir/$_CKIPPER_SYNC_MANIFEST_FILE" + local tmp; tmp=$(mktemp "$manifest.XXXXXX") + if jq --arg p "$rel" --arg o "$op" --arg t "$type" --arg i "$items" \ + '.files += [{path: $p, operation: $o, type: $t, items: ($i | split(",") | map(select(length > 0)))}]' \ + "$manifest" > "$tmp"; then + mv "$tmp" "$manifest" && chmod "$_CKIPPER_SYNC_BACKUP_FILE_PERMS" "$manifest" + return $? + fi + rm -f "$tmp" + return 1 +} + +# List backup directories under <dst_dir>, newest first. Returns absolute paths. +# Sort key is the directory basename — the ts prefix sorts lexicographically +# by design so plain `sort -r` gives newest-first ordering. +# +# Args: $1 — destination account dir. +# Returns: 0 always; prints absolute backup-dir paths, one per line. +_ckipper_account_sync_manifest_list_backups() { + local dst_dir="$1" + local root="$dst_dir/$_CKIPPER_SYNC_BACKUP_SUBDIR" + [[ ! -d "$root" ]] && return 0 + local d + for d in "$root"/*(N/); do + echo "$d" + done | sort -r +} + +# Roll back a single target by reversing every entry in its manifest: +# - operation=create → delete the file at the live path (the sync put it there) +# - operation=overwrite → restore from <backup_dir>/<rel> (rm + cp of the saved copy) +# +# The live path is normally <dst_dir>/<rel>, but for type=prefs it's +# $CKIPPER_REGISTRY. _ckipper_account_sync_live_path encapsulates that. +# +# Best-effort per-entry: a missing backup or a permission error is logged +# (stderr) but does not stop subsequent entries from rolling back. +# +# Args: $1 — backup_dir for this target; $2 — destination account dir. +# Returns: 0 on full success; 1 if any per-entry rollback failed.
+# Errors (stderr): "rollback failed: <rel> — <reason>" — per-entry failures. +_ckipper_account_sync_rollback_target() { + local backup_dir="$1" dst_dir="$2" + local manifest="$backup_dir/$_CKIPPER_SYNC_MANIFEST_FILE" + [[ ! -f "$manifest" ]] && return 0 + local rc=0 + while IFS=$'\t' read -r op rel type; do + _ckipper_account_sync_rollback_one "$backup_dir" "$dst_dir" "$op" "$rel" "$type" || rc=1 + done < <(jq -r '.files[] | "\(.operation)\t\(.path)\t\(.type // "")"' "$manifest") + return $rc +} + +# Per-entry rollback helper. Kept separate so _rollback_target stays under +# the 25-line cap and the per-entry logic is independently unit-testable. +# +# Args: $1 — backup_dir; $2 — dst_dir; $3 — operation; $4 — relative path; +# $5 — type (optional; routes prefs to $CKIPPER_REGISTRY instead of dst_dir). +# Returns: 0 on success; 1 on rm/cp failure. +# Errors (stderr): "rollback failed: <rel> — <reason>" +_ckipper_account_sync_rollback_one() { + local backup_dir="$1" dst_dir="$2" op="$3" rel="$4" type="${5:-}" + local live; live=$(_ckipper_account_sync_live_path "$type" "$dst_dir" "$rel") + if [[ "$op" == "create" ]]; then + rm -rf "$live" 2>/dev/null || { + echo "rollback failed: $rel — could not remove created file" >&2 + return 1 + } + return 0 + fi + local backup_path="$backup_dir/$rel" + [[ ! -e "$backup_path" ]] && return 0 + rm -rf "$live" 2>/dev/null + mkdir -p "${live:h}" + cp -a "$backup_path" "$live" || { + echo "rollback failed: $rel — could not restore from backup" >&2 + return 1 + } +} + +# Public undo entry — restore from a specific backup dir, then delete it. +# Caller is responsible for refusing if Claude is running on the destination +# (engine.zsh handles that gate). +# +# Args: $1 — backup_dir; $2 — destination account dir. +# Returns: 0 on full restore + cleanup; 1 if restore had failures +# (backup dir is preserved on partial failure for inspection). +_ckipper_account_sync_undo_from_backup() { + local backup_dir="$1" dst_dir="$2" + if !
_ckipper_account_sync_rollback_target "$backup_dir" "$dst_dir"; then + return 1 + fi + rm -rf "$backup_dir" + return 0 +} diff --git a/lib/account/sync/backup_test.bats b/lib/account/sync/backup_test.bats new file mode 100644 index 0000000..1f1a0e2 --- /dev/null +++ b/lib/account/sync/backup_test.bats @@ -0,0 +1,231 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/sync/backup.zsh — backup creation, manifest, undo. + +load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash" + +setup() { setup_isolated_env; } +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; $*" +} + +@test "_ckipper_account_sync_backup_dir_path generates a UTC ISO timestamp" { + run_in_zsh ' + path=$(_ckipper_account_sync_backup_dir_path "/tmp/dst" "personal") + echo "$path"' + [ "$status" -eq 0 ] + [[ "$output" == */tmp/dst/.ckipper-sync-backups/*-from-personal* ]] +} + +@test "_ckipper_account_sync_backup_create makes the dir 0700" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + echo \"\$backup_dir\"" + [ "$status" -eq 0 ] + local dir; dir=$(echo "$output" | tail -1) + [[ -d "$dir" ]] + local mode; mode=$(stat -f '%Lp' "$dir" 2>/dev/null || stat -c '%a' "$dir" 2>/dev/null) + [[ "$mode" == "700" ]] +} + +@test "_ckipper_account_sync_backup_file copies a regular file with 0600" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/hooks" + echo "original content" > "$dst/hooks/foo.sh" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/hooks/foo.sh' 'hooks/foo.sh' + cat \"\$backup_dir/hooks/foo.sh\"" + [ "$status" -eq 0 ] + [[ "$output" == *"original content"* ]] +} + +@test "_ckipper_account_sync_backup_file is a no-op for missing source (operation == create)" { + local 
dst="$TMP_HOME/dest" + mkdir -p "$dst" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/does-not-exist' 'phantom' && echo OK" + [ "$status" -eq 0 ] + [[ "$output" == *"OK"* ]] +} + +@test "_ckipper_account_sync_backup_file is idempotent: second call preserves original snapshot" { + # Two strategies (e.g. settings + statusline) both back up settings.json. + # The first call must capture the pre-sync state; the second must NOT + # overwrite it with the post-first-write intermediate state, otherwise + # rollback restores a corrupted baseline. + local dst="$TMP_HOME/dest" + mkdir -p "$dst" + echo "ORIGINAL" > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/settings.json' 'settings.json' + echo MODIFIED > '$dst/settings.json' + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/settings.json' 'settings.json' + cat \"\$backup_dir/settings.json\"" + [[ "$output" == *"ORIGINAL"* ]] + [[ "$output" != *"MODIFIED"* ]] +} + +@test "_ckipper_account_sync_backup_file copies directories recursively (cp -a)" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/skills/foo" + echo "a" > "$dst/skills/foo/SKILL.md" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/skills/foo' 'skills/foo' + cat \"\$backup_dir/skills/foo/SKILL.md\"" + [ "$status" -eq 0 ] + [[ "$output" == *"a"* ]] +} + +@test "_ckipper_account_sync_manifest_init writes a valid empty manifest" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + cat \"\$backup_dir/.ckipper-sync-manifest.json\" | jq -r '.version, .source, .target'" + [ "$status" -eq 0 ] + [[ "$output" == 
*"1"* ]] + [[ "$output" == *"personal"* ]] + [[ "$output" == *"work"* ]] +} + +@test "_ckipper_account_sync_manifest_append cleans up its tmp file when jq fails" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + # Corrupt the manifest so jq exits non-zero on the next append. + echo 'not-json' > \"\$backup_dir/.ckipper-sync-manifest.json\" + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'x' overwrite mcp 'a' 2>/dev/null + # Tmp files match .ckipper-sync-manifest.json.XXXXXX in the backup dir. + ls \"\$backup_dir\"/.ckipper-sync-manifest.json.* 2>/dev/null | wc -l | tr -d ' '" + [[ "$output" == *"0"* ]] +} + +@test "_ckipper_account_sync_manifest_append adds an entry" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'settings.json' overwrite mcp 'github,vibma' + jq -r '.files | length' \"\$backup_dir/.ckipper-sync-manifest.json\"" + [ "$status" -eq 0 ] + [[ "$output" == *"1"* ]] +} + +@test "_ckipper_account_sync_manifest_list_backups sorts newest first" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/.ckipper-sync-backups/2026-01-01T00-00-00Z-from-a" + mkdir -p "$dst/.ckipper-sync-backups/2026-05-02T00-00-00Z-from-b" + mkdir -p "$dst/.ckipper-sync-backups/2026-03-15T00-00-00Z-from-c" + run_in_zsh "_ckipper_account_sync_manifest_list_backups '$dst' | head -1" + [ "$status" -eq 0 ] + [[ "$output" == *"2026-05-02T00-00-00Z-from-b"* ]] +} + +@test "_ckipper_account_sync_rollback_target restores backed-up files atomically" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/hooks" + echo "original" > "$dst/hooks/foo.sh" + + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + 
_ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/hooks/foo.sh' 'hooks/foo.sh' + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'hooks/foo.sh' overwrite hooks foo.sh + echo modified > '$dst/hooks/foo.sh' + _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst' + cat '$dst/hooks/foo.sh'" + [ "$status" -eq 0 ] + [[ "$output" == *"original"* ]] + [[ "$output" != *"modified"* ]] +} + +@test "_ckipper_account_sync_rollback_target removes files marked operation=create" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/hooks" + + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'hooks/new.sh' create hooks new.sh + echo new-content > '$dst/hooks/new.sh' + _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst' + [[ -e '$dst/hooks/new.sh' ]] && echo STILL_THERE || echo GONE" + [[ "$output" == *"GONE"* ]] +} + +@test "_ckipper_account_sync_undo_from_backup restores and removes backup dir" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst/hooks" + echo "original" > "$dst/hooks/foo.sh" + + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' personal) + _ckipper_account_sync_manifest_init \"\$backup_dir\" personal work + _ckipper_account_sync_backup_file \"\$backup_dir\" '$dst/hooks/foo.sh' 'hooks/foo.sh' + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'hooks/foo.sh' overwrite hooks foo.sh + echo modified > '$dst/hooks/foo.sh' + _ckipper_account_sync_undo_from_backup \"\$backup_dir\" '$dst' + echo \"--\" + cat '$dst/hooks/foo.sh' + echo \"--\" + [[ -d \"\$backup_dir\" ]] && echo BACKUP_KEPT || echo BACKUP_REMOVED" + [[ "$output" == *"original"* ]] + [[ "$output" == *"BACKUP_REMOVED"* ]] +} + +# ── live_path + type-aware rollback (Bug G fix) ────────────────────────── + +@test 
"_ckipper_account_sync_live_path returns CKIPPER_REGISTRY for type=prefs" { + run_in_zsh " + export CKIPPER_REGISTRY='$TMP_HOME/.ckipper/accounts.json' + _ckipper_account_sync_live_path prefs '$TMP_HOME/dst' 'accounts.json'" + [[ "$output" == *".ckipper/accounts.json"* ]] + [[ "$output" != *"/dst/accounts.json"* ]] +} + +@test "_ckipper_account_sync_live_path returns dst/rel for non-prefs types" { + run_in_zsh "_ckipper_account_sync_live_path mcp '$TMP_HOME/dst' '.claude.json'" + [[ "$output" == *"$TMP_HOME/dst/.claude.json"* ]] +} + +@test "_ckipper_account_sync_live_path falls back to dst/rel when type is empty" { + run_in_zsh "_ckipper_account_sync_live_path '' '$TMP_HOME/dst' 'foo'" + [[ "$output" == *"$TMP_HOME/dst/foo"* ]] +} + +# Bug G: prefs rollback used to write to $dst_dir/accounts.json, NOT to +# $CKIPPER_REGISTRY — silently corrupting the registry restore. Fix passes +# the type field from the manifest through to rollback_one so prefs is +# correctly routed to the registry. 
+@test "_ckipper_account_sync_rollback_target restores prefs to CKIPPER_REGISTRY (Bug G)" { + local dst="$TMP_HOME/dest" + mkdir -p "$dst" "$TMP_HOME/.ckipper" + local registry="$TMP_HOME/.ckipper/accounts.json" + echo '{"version":2,"original":"yes"}' > "$registry" + + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \ + CKIPPER_REGISTRY="$registry" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \ + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src); \ + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst; \ + _ckipper_account_sync_backup_file \"\$backup_dir\" '$registry' 'accounts.json'; \ + _ckipper_account_sync_manifest_append \"\$backup_dir\" 'accounts.json' overwrite prefs always_docker; \ + echo '{\"version\":2,\"corrupted\":\"yes\"}' > '$registry'; \ + _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst'; \ + jq -r '.original // \"missing\"' '$registry'; \ + [[ -e '$dst/accounts.json' ]] && echo STRAY_DST_FILE || echo NO_STRAY" + [[ "$output" == *"yes"* ]] + [[ "$output" == *"NO_STRAY"* ]] +} diff --git a/lib/account/sync/dispatcher.zsh b/lib/account/sync/dispatcher.zsh new file mode 100644 index 0000000..1ee9ef7 --- /dev/null +++ b/lib/account/sync/dispatcher.zsh @@ -0,0 +1,337 @@ +#!/usr/bin/env zsh +# Dispatcher for `ckipper account sync` and `ckipper account sync undo`. +# +# Owns argument parsing and high-level mode routing. 
Delegates to: +# - lib/account/sync/engine.zsh (build_change_set, apply_target) +# - lib/account/sync/preview.zsh (render_summary, drill_down_loop) +# - lib/account/sync/interactive.zsh (pickers when args are missing) +# - lib/account/sync/registry.zsh (resolve_includes, type validation) +# - lib/core/registry.zsh (_core_account_dir) + +typeset -g _CKIPPER_SYNC_FROM="" +typeset -ga _CKIPPER_SYNC_TARGETS=() +typeset -g _CKIPPER_SYNC_INCLUDE="" +typeset -g _CKIPPER_SYNC_EXCLUDE="" +typeset -g _CKIPPER_SYNC_DRY_RUN="false" +typeset -g _CKIPPER_SYNC_YES="false" +typeset -g _CKIPPER_SYNC_FORCE="false" + +# _CKIPPER_SYNC_CTX is declared in lib/account/sync/_shared.zsh — a single +# canonical declaration that the engine, dispatcher, and preview modules +# all share. Reset between invocations is handled by reset_args, not at +# declaration. + +# Reset all module-level _SYNC_* holders. Called at the top of every +# parse_args invocation so re-running the dispatcher in the same shell +# doesn't see stale state from the previous call. +# +# Returns: 0 always. +_ckipper_account_sync_reset_args() { + _CKIPPER_SYNC_FROM=""; _CKIPPER_SYNC_TARGETS=() + _CKIPPER_SYNC_INCLUDE=""; _CKIPPER_SYNC_EXCLUDE="" + _CKIPPER_SYNC_DRY_RUN="false"; _CKIPPER_SYNC_YES="false"; _CKIPPER_SYNC_FORCE="false" + _CKIPPER_SYNC_CTX=() +} + +# Append a positional arg to _CKIPPER_SYNC_FROM (first time) or _CKIPPER_SYNC_TARGETS (every +# subsequent positional). Extracted from parse_args so the case body stays +# at 2 levels of nesting per .claude/rules/code-style.md. +# +# Args: $1 — positional value. +# Returns: 0 always. +_ckipper_account_sync_accumulate_positional() { + [[ -z "$_CKIPPER_SYNC_FROM" ]] && { _CKIPPER_SYNC_FROM="$1"; return 0; } + _CKIPPER_SYNC_TARGETS+=("$1") +} + +# Parse `ckipper account sync` arguments into module-level _SYNC_* vars. +# Positional: <from> [<target> ...]. Flags: --include/--exclude/--dry-run/--yes/--force. +# +# Args: $@ — raw argv after `ckipper account sync` is stripped.
+# Returns: 0 on success; 1 on unknown flag; 2 if --help printed.
+# Errors (stderr): "Unknown flag: <flag>" — when an unrecognized --foo appears.
+_ckipper_account_sync_parse_args() {
+  _ckipper_account_sync_reset_args
+  while [[ $# -gt 0 ]]; do
+    case "$1" in
+      --include) _CKIPPER_SYNC_INCLUDE="$2"; shift 2 ;;
+      --exclude) _CKIPPER_SYNC_EXCLUDE="$2"; shift 2 ;;
+      --dry-run) _CKIPPER_SYNC_DRY_RUN="true"; shift ;;
+      --yes) _CKIPPER_SYNC_YES="true"; shift ;;
+      --force) _CKIPPER_SYNC_FORCE="true"; shift ;;
+      -h|--help) _ckipper_account_sync_help_text; return 2 ;;
+      --*) echo "Unknown flag: $1" >&2; return 1 ;;
+      *) _ckipper_account_sync_accumulate_positional "$1"; shift ;;
+    esac
+  done
+  return 0
+}
+
+# Top-level dispatcher. Routes to undo if first arg is "undo"; otherwise
+# falls through to the sync flow.
+#
+# Args: $@ — args after `ckipper account sync`.
+# Returns: 0 on success; 1 on user-visible failure.
+_ckipper_account_sync_dispatch() {
+  if [[ "$1" == "undo" ]]; then
+    shift
+    _ckipper_account_sync_undo_dispatch "$@"
+    return $?
+  fi
+  _ckipper_account_sync_parse_args "$@"
+  local rc=$?
+  (( rc == 2 )) && return 0
+  (( rc != 0 )) && return $rc
+  _ckipper_account_sync_run
+}
+
+# Resolve missing positionals via interactive pickers, then run the engine
+# for each target.
+#
+# Returns: 0 on success across all targets; 1 if any target failed.
+_ckipper_account_sync_run() {
+  if [[ -z "$_CKIPPER_SYNC_FROM" ]]; then
+    _CKIPPER_SYNC_FROM=$(_ckipper_account_sync_pick_source) || return 1
+  fi
+  if (( ${#_CKIPPER_SYNC_TARGETS} == 0 )); then
+    _CKIPPER_SYNC_TARGETS=( ${(f)"$(_ckipper_account_sync_pick_targets "$_CKIPPER_SYNC_FROM")"} )
+    (( ${#_CKIPPER_SYNC_TARGETS} == 0 )) && return 1
+  fi
+  _ckipper_account_sync_validate_accounts || return 1
+  local -a types
+  types=( ${(f)"$(_ckipper_account_sync_resolve_types)"} )
+  (( ${#types} == 0 )) && { echo "No types selected."
>&2; return 1; } + _ckipper_account_sync_run_targets types +} + +# Walk every target and apply the resolved type list. +# +# Args: $1 — name of array variable holding type ids. +# Returns: 0 if every target succeeded; 1 if any failed. +_ckipper_account_sync_run_targets() { + local types_var="$1" + local -a types_local; types_local=( "${(@P)types_var}" ) + local rc=0 target + for target in "${_CKIPPER_SYNC_TARGETS[@]}"; do + _ckipper_account_sync_validate_pair "$_CKIPPER_SYNC_FROM" "$target" || { rc=1; continue; } + _ckipper_account_sync_run_one_target "$target" "${types_local[@]}" || rc=1 + done + return $rc +} + +# Validate that source + every target are registered accounts. +# +# Returns: 0 if all registered; 1 if any unknown (error printed by _core_account_dir). +_ckipper_account_sync_validate_accounts() { + _core_account_dir "$_CKIPPER_SYNC_FROM" >/dev/null || return 1 + local t + for t in "${_CKIPPER_SYNC_TARGETS[@]}"; do + _core_account_dir "$t" >/dev/null || return 1 + done +} + +# Resolve --include/--exclude into a final type list. Empty include with +# no flags drops to interactive type picker. +# +# Returns: 0; prints type ids one per line. +_ckipper_account_sync_resolve_types() { + if [[ -z "$_CKIPPER_SYNC_INCLUDE" && "$_CKIPPER_SYNC_YES" != "true" && "$_CKIPPER_SYNC_DRY_RUN" != "true" ]]; then + _ckipper_account_sync_pick_types + return 0 + fi + [[ -z "$_CKIPPER_SYNC_INCLUDE" ]] && _CKIPPER_SYNC_INCLUDE="all" + _ckipper_account_sync_resolve_includes "$_CKIPPER_SYNC_INCLUDE" "$_CKIPPER_SYNC_EXCLUDE" +} + +# One-target slice: build change set, render preview, prompt, apply. +# +# Args: $1 — target name; $2..$N — types. +# Returns: 0 on success; 1 on failure. 
+_ckipper_account_sync_run_one_target() { + local target="$1"; shift + local src_dir dst_dir + src_dir=$(_core_account_dir "$_CKIPPER_SYNC_FROM") + dst_dir=$(_core_account_dir "$target") + # Dry-run is read-only; the running-Claude refusal exists to prevent + # races with writes, so let preview-only invocations through. + if [[ "$_CKIPPER_SYNC_DRY_RUN" != "true" ]]; then + _ckipper_account_sync_assert_dst_idle "$dst_dir" "$_CKIPPER_SYNC_FORCE" || return 1 + fi + _ckipper_account_sync_prepare_target_artifacts "$target" "$src_dir" "$dst_dir" "$@" + local action="apply" + if [[ "$_CKIPPER_SYNC_DRY_RUN" != "true" && "$_CKIPPER_SYNC_YES" != "true" ]]; then + action=$(_ckipper_account_sync_preview_prompt) + fi + [[ "$_CKIPPER_SYNC_DRY_RUN" == "true" ]] && action="dry-run" + _ckipper_account_sync_finalize "$action" +} + +# Initialize per-target context and build the diff/summary artifacts. +# Steps: mktemp the changeset/summaries/items tmpfiles, populate _CKIPPER_SYNC_CTX, +# build the changeset and summaries TSVs, the drill-down items file, then +# render the preview block. +# +# Args: $1 — target name; $2 — src_dir; $3 — dst_dir; $4..$N — types. +# Returns: 0 always. 
+_ckipper_account_sync_prepare_target_artifacts() { + local target="$1" src_dir="$2" dst_dir="$3" + shift 3 + local changeset summaries items + changeset=$(mktemp); summaries=$(mktemp); items=$(mktemp) + _CKIPPER_SYNC_CTX=( + src_dir "$src_dir" dst_dir "$dst_dir" + src_name "$_CKIPPER_SYNC_FROM" dst_name "$target" + changeset "$changeset" summaries "$summaries" items "$items" + ) + _ckipper_account_sync_build_change_set "$src_dir" "$dst_dir" \ + "$_CKIPPER_SYNC_FROM" "$target" "$@" > "$changeset" + _ckipper_account_sync_build_summaries "$src_dir" "$dst_dir" \ + "$_CKIPPER_SYNC_FROM" "$target" < "$changeset" > "$summaries" + _ckipper_account_sync_drill_down_items < "$changeset" > "$items" + _ckipper_account_sync_show_preview "$target" "$dst_dir" "$changeset" "$summaries" +} + +# Render the preview block — header, summary table, change count. +# +# Args: $1 — target; $2 — dst_dir; $3 — changeset file; $4 — summaries file. +# Returns: 0 always. +_ckipper_account_sync_show_preview() { + local target="$1" dst_dir="$2" changeset="$3" summaries="$4" + local backup_path + backup_path=$(_ckipper_account_sync_backup_dir_path "$dst_dir" "$_CKIPPER_SYNC_FROM") + _ckipper_account_sync_render_summary "$_CKIPPER_SYNC_FROM" "$target" \ + "$backup_path" "$summaries" < "$changeset" + local counts; counts=$(_ckipper_account_sync_count_changes < "$changeset") + local total new ow; read -r total new ow <<< "$counts" + echo "$total changes ($new new, $ow overwrite)." +} + +# Apply / View changes / Abort prompt, with View looping back through +# the drill-down picker until Apply or Abort. +# +# Reads _CKIPPER_SYNC_CTX[dst_name] for the prompt label. 
Callers capture this
+# function's stdout via $() to read the action token, so two precautions
+# keep "$action" clean: (1) drill_down_loop is redirected to stderr — its
+# diff output and "press enter" prompts are user-facing terminal output,
+# not data; (2) `local choice` is declared once outside the loop, because
+# re-declaring `local choice` inside the loop after the first iteration
+# causes zsh to print the prior value as `choice='...'`, which would also
+# leak into "$action".
+#
+# Returns: 0; prints "apply" | "abort".
+_ckipper_account_sync_preview_prompt() {
+  local target="${_CKIPPER_SYNC_CTX[dst_name]}"
+  local choice=""
+  while true; do
+    choice=$(_core_prompt_choose "Apply changes to $target?" "Apply" "View changes" "Abort")
+    case "$choice" in
+      Apply) echo "apply"; return 0 ;;
+      Abort|"") echo "abort"; return 0 ;;
+      "View changes") _ckipper_account_sync_drill_down_loop >&2 ;;
+    esac
+  done
+}
+
+# Run the chosen action, clean up tmpfiles, return the apply rc. Reads the
+# tmpfile paths and apply args from _CKIPPER_SYNC_CTX.
+#
+# Args: $1 — action.
+# Returns: 0 unless action=apply and the apply failed.
+_ckipper_account_sync_finalize() {
+  local action="$1"
+  local rc=0
+  if [[ "$action" == "apply" ]]; then
+    _ckipper_account_sync_apply_target \
+      "${_CKIPPER_SYNC_CTX[src_dir]}" "${_CKIPPER_SYNC_CTX[dst_dir]}" \
+      "${_CKIPPER_SYNC_CTX[src_name]}" "${_CKIPPER_SYNC_CTX[dst_name]}" \
+      < "${_CKIPPER_SYNC_CTX[changeset]}"
+    rc=$?
+  fi
+  rm -f "${_CKIPPER_SYNC_CTX[changeset]}" "${_CKIPPER_SYNC_CTX[summaries]}" "${_CKIPPER_SYNC_CTX[items]}"
+  return $rc
+}
+
+# Help text for `ckipper account sync`.
+#
+# Returns: 0 always.
+_ckipper_account_sync_help_text() {
+  _core_help_render "ckipper account sync [<from>] [<target>...] [options]" \
+    "" \
+    "Sync state between registered Claude accounts. Empty positionals drop" \
+    "into interactive pickers (gum-driven)."
\
+    "" \
+    "Options:" \
+    "  --include <types>  Comma-separated types or named bundle:" \
+    "                     all | customizations | claude-config | preferences" \
+    "                     Type tokens: mcp, settings, claude-md, agents," \
+    "                     commands, output-styles, skills, statusline," \
+    "                     hooks, prefs" \
+    "  --exclude <types>  Subtract from --include." \
+    "  --dry-run          Print summary, exit (no prompt, no writes)." \
+    "  --yes              Skip the confirm prompt; apply directly." \
+    "  --force            Bypass the running-Claude refusal." \
+    "" \
+    "Subcommand:" \
+    "  ckipper account sync undo <account> [--pick | --list]" \
+    "" \
+    "Examples:" \
+    "  ckipper account sync                                     (full wizard)" \
+    "  ckipper account sync personal work --include mcp" \
+    "  ckipper account sync personal work --include all --yes" \
+    "  ckipper account sync personal work client1 --include customizations"
+}
+
+# Undo subcommand dispatcher.
+#
+# Force is read from a LOCAL var, not the module-level _CKIPPER_SYNC_FORCE. The
+# previous design read _CKIPPER_SYNC_FORCE directly, which leaked across
+# invocations: a prior `sync ... --force` left _CKIPPER_SYNC_FORCE="true" in the
+# shell, and a subsequent `sync undo` (without --force) inherited it,
+# silently bypassing the running-Claude refusal. parse_args resets
+# _CKIPPER_SYNC_FORCE on the sync path, but the undo path skips parse_args.
+#
+# Args: $1 — account name; flags: --pick | --list | --force.
+# Returns: 0 on success; 1 on user-visible failure.
+# Errors (stderr): "Usage: ckipper account sync undo <account>" — when the
+#   account positional is missing; "Unknown flag: <flag>" — when an
+#   unrecognized --foo is passed.
+_ckipper_account_sync_undo_dispatch() {
+  local account="$1"; shift 2>/dev/null
+  [[ -z "$account" ]] && { echo "Usage: ckipper account sync undo <account>" >&2; return 1; }
+  local dst_dir; dst_dir=$(_core_account_dir "$account") || return 1
+  local mode="latest" force="false"
+  while [[ $# -gt 0 ]]; do
+    case "$1" in
+      --pick) mode="pick"; shift ;;
+      --list) mode="list"; shift ;;
+      --force) force="true"; shift ;;
+      *) echo "Unknown flag: $1" >&2; return 1 ;;
+    esac
+  done
+  _ckipper_account_sync_assert_dst_idle "$dst_dir" "$force" || return 1
+  _ckipper_account_sync_undo_run "$account" "$dst_dir" "$mode"
+}
+
+# Run the chosen undo mode (latest / pick / list).
+#
+# Args: $1 — account name; $2 — dst_dir; $3 — mode (latest|pick|list).
+# Returns: 0 on success; 1 on failure or no backups.
+_ckipper_account_sync_undo_run() {
+  local account="$1" dst_dir="$2" mode="$3"
+  local -a backups
+  backups=( ${(f)"$(_ckipper_account_sync_manifest_list_backups "$dst_dir")"} )
+  if (( ${#backups} == 0 )); then
+    echo "No backups for $account."
+    return 1
+  fi
+  local target_backup="${backups[1]}"
+  if [[ "$mode" == "list" ]]; then
+    printf '%s\n' "${backups[@]}"
+    return 0
+  fi
+  if [[ "$mode" == "pick" ]]; then
+    target_backup=$(_core_prompt_choose "Pick a backup to restore" "${backups[@]}")
+    [[ -z "$target_backup" ]] && return 1
+  fi
+  _ckipper_account_sync_undo_from_backup "$target_backup" "$dst_dir"
+}
diff --git a/lib/account/sync/dispatcher_test.bats b/lib/account/sync/dispatcher_test.bats
new file mode 100644
index 0000000..735df2e
--- /dev/null
+++ b/lib/account/sync/dispatcher_test.bats
@@ -0,0 +1,212 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/dispatcher.zsh — arg parsing skeleton.
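The local-`force` fix in `_ckipper_account_sync_undo_dispatch` guards against a general shell hazard: module-level state set by one entry point survives into a later call in the same shell unless every entry point resets it. A minimal standalone sketch of that hazard and the fix (all names here are illustrative, not Ckipper's actual functions):

```shell
# A module-level flag set by one command leaks into a later command that
# neither parses nor resets it. `undo_fixed` mirrors the dispatcher's fix:
# a local variable with an explicit default, parsed from its own argv.
SYNC_FORCE="false"

sync_cmd() { SYNC_FORCE="true"; }           # like `sync --force` setting module state

undo_leaky() { echo "force=$SYNC_FORCE"; }  # reads the stale module global

undo_fixed() {                              # owns a local default instead
  local force="false"
  [ "$1" = "--force" ] && force="true"
  echo "force=$force"
}

sync_cmd                  # a prior forced sync in the same shell session
leaky=$(undo_leaky)       # inherits the stale "true" — the bug
fixed=$(undo_fixed)       # defaults back to "false" — the fix
```

Run in one shell, `undo_leaky` reports `force=true` even though no `--force` was passed, while `undo_fixed` reports `force=false`.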
+ +load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash" + +setup() { setup_isolated_env; } +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env CKIPPER_DIR="$CKIPPER_DIR" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/dispatcher.zsh\"; $*" +} + +@test "parse_args identifies --dry-run flag" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work --dry-run + echo "from=$_CKIPPER_SYNC_FROM" + echo "targets=${_CKIPPER_SYNC_TARGETS[*]}" + echo "dry_run=$_CKIPPER_SYNC_DRY_RUN"' + [[ "$output" == *"from=personal"* ]] + [[ "$output" == *"targets=work"* ]] + [[ "$output" == *"dry_run=true"* ]] +} + +@test "parse_args identifies --yes flag" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work --yes + echo "yes=$_CKIPPER_SYNC_YES"' + [[ "$output" == *"yes=true"* ]] +} + +@test "parse_args identifies multiple targets" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work client1 client2 + echo "targets=${(j:,:)_CKIPPER_SYNC_TARGETS}"' + [[ "$output" == *"targets=work,client1,client2"* ]] +} + +@test "parse_args identifies --include with comma list" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work --include mcp,settings + echo "include=$_CKIPPER_SYNC_INCLUDE"' + [[ "$output" == *"include=mcp,settings"* ]] +} + +@test "parse_args identifies --exclude with comma list" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work --include all --exclude prefs + echo "include=$_CKIPPER_SYNC_INCLUDE" + echo "exclude=$_CKIPPER_SYNC_EXCLUDE"' + [[ "$output" == *"include=all"* ]] + [[ "$output" == *"exclude=prefs"* ]] +} + +@test "parse_args identifies --force" { + run_in_zsh ' + _ckipper_account_sync_parse_args personal work --force + echo "force=$_CKIPPER_SYNC_FORCE"' + [[ "$output" == *"force=true"* ]] +} + +@test "parse_args returns 1 on unknown flag" { + run_in_zsh 
'_ckipper_account_sync_parse_args personal work --bogus'
+  [ "$status" -ne 0 ]
+}
+
+@test "parse_args allows empty positionals (drop-to-picker)" {
+  run_in_zsh '
+    _ckipper_account_sync_parse_args
+    echo "from=${_CKIPPER_SYNC_FROM:-EMPTY}"
+    echo "n_targets=${#_CKIPPER_SYNC_TARGETS[@]}"'
+  [[ "$output" == *"from=EMPTY"* ]]
+  [[ "$output" == *"n_targets=0"* ]]
+}
+
+# ── Integration: end-to-end dispatch with seeded accounts ────────────────
+
+setup_two_accounts() {
+  # Minimal registry fixture (shape assumed) registering src and dst.
+  cat > "$CKIPPER_REGISTRY" <<EOF
+{"version":2,"accounts":{"src":{"dir":"$TMP_HOME/src"},"dst":{"dir":"$TMP_HOME/dst"}}}
+EOF
+  mkdir -p "$TMP_HOME/src" "$TMP_HOME/dst"
+  echo '{"mcpServers":{"github":{"command":"x"}}}' > "$TMP_HOME/src/.claude.json"
+  echo '{"mcpServers":{}}' > "$TMP_HOME/dst/.claude.json"
+}
+
+run_full() {
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    CKIPPER_NO_GUM=1 CKIPPER_FORCE=1 TMP_HOME="$TMP_HOME" PATH="$PATH" \
+    zsh -c "source \"$REPO_ROOT/ckipper.zsh\"; $*"
+}
+
+@test "ckipper account sync src dst --include mcp --yes applies the merge" {
+  setup_two_accounts
+  run_full 'ckipper account sync src dst --include mcp --yes'
+  [ "$status" -eq 0 ]
+  local merged
+  merged=$(jq -r '.mcpServers.github.command' "$TMP_HOME/dst/.claude.json")
+  [[ "$merged" == "x" ]]
+}
+
+@test "ckipper account sync src dst --include mcp --dry-run does not apply" {
+  setup_two_accounts
+  run_full 'ckipper account sync src dst --include mcp --dry-run'
+  [ "$status" -eq 0 ]
+  local n; n=$(jq '.mcpServers | length' "$TMP_HOME/dst/.claude.json")
+  [[ "$n" == "0" ]]
+}
+
+# Regression: dry-run is read-only; the running-Claude refusal exists to
+# prevent races with writes. A user previewing changes while their session
+# is still open must not be blocked.
+@test "ckipper account sync --dry-run is not blocked when Claude is running on dst" {
+  setup_two_accounts
+  run_full '
+    # Stub: pretend claude is running and its argv includes the dst path.
+ _core_running_claude_processes() { echo "12345 claude $TMP_HOME/dst"; } + ckipper account sync src dst --include mcp --dry-run' + [ "$status" -eq 0 ] + [[ "$output" != *"Refusing to sync"* ]] +} + +# Companion check: without --dry-run (and without --force), the same scenario +# must still abort — the dry-run skip is the only carve-out. +@test "ckipper account sync without --dry-run IS blocked when Claude is running on dst" { + setup_two_accounts + run_full ' + _core_running_claude_processes() { echo "12345 claude $TMP_HOME/dst"; } + ckipper account sync src dst --include mcp --yes' + [ "$status" -ne 0 ] + [[ "$output" == *"Refusing to sync"* ]] +} + +@test "ckipper account sync rejects unregistered source" { + setup_two_accounts + run_full 'ckipper account sync ghost dst --include mcp --yes' + [ "$status" -ne 0 ] +} + +@test "ckipper account sync src src is rejected (identity)" { + setup_two_accounts + run_full 'ckipper account sync src src --include mcp --yes' + [ "$status" -ne 0 ] +} + +# Bug B: undo_dispatch used to read $_CKIPPER_SYNC_FORCE directly, which leaked +# state from a prior sync invocation in the same shell. A user who ran +# `sync ... --force` (setting _CKIPPER_SYNC_FORCE=true) and then ran `sync undo` +# without --force would silently bypass the running-Claude refusal because +# parse_args resets _CKIPPER_SYNC_FORCE only on the sync path. Fix: undo uses a +# local force var, defaulting to false. +@test "sync undo does NOT inherit --force from a prior sync invocation (Bug B)" { + setup_two_accounts + run_full ' + # Apply a sync first WITH --force so _CKIPPER_SYNC_FORCE leaks to module state. + _core_running_claude_processes() { return 0; } + ckipper account sync src dst --include mcp --yes --force >/dev/null 2>&1 + # Now undo without --force; running Claude should block it again. 
+ _core_running_claude_processes() { echo "12345 claude"; } + ckipper account sync undo dst' + [ "$status" -ne 0 ] + [[ "$output" == *"Refusing to sync"* ]] +} + +# Bug B (companion): sync undo --force still works on its own. +@test "sync undo --force bypasses the running-Claude refusal" { + setup_two_accounts + # Need at least one backup to exist for undo to do anything past the gate. + run_full ' + _core_running_claude_processes() { return 0; } + ckipper account sync src dst --include mcp --yes >/dev/null 2>&1 + _core_running_claude_processes() { echo "12345 claude"; } + ckipper account sync undo dst --force' + [ "$status" -eq 0 ] +} + +# Regression: when the user picks "View changes" then "Apply", the diff +# output written by drill_down_loop must NOT pollute the captured action, +# else the [[ "$action" == "apply" ]] check downstream silently skips apply. +# The mock _core_prompt_choose persists state via a flag file because each +# choice=$(...) call inside preview_prompt opens a fresh subshell. +@test "preview_prompt View changes then Apply yields exactly 'apply'" { + run_in_zsh ' + _CKIPPER_SYNC_FROM=src + items=$(mktemp); echo "x" > "$items" + _CKIPPER_SYNC_CTX[dst_name]=dst + _CKIPPER_SYNC_CTX[items]=$items + export _PROMPT_FLAG=$(mktemp) + _core_prompt_choose() { + if [[ -e "$_PROMPT_FLAG" ]]; then + rm "$_PROMPT_FLAG" + echo "View changes" + else + echo "Apply" + fi + } + _ckipper_account_sync_drill_down_loop() { + echo "── source diff ──" + echo "+++ added line" + echo "(Press enter to return to picker)" + } + action=$(_ckipper_account_sync_preview_prompt) + rm -f "$items" + echo "ACTION=[$action]"' + [[ "$output" == *"ACTION=[apply]"* ]] +} diff --git a/lib/account/sync/engine.zsh b/lib/account/sync/engine.zsh new file mode 100644 index 0000000..4982da8 --- /dev/null +++ b/lib/account/sync/engine.zsh @@ -0,0 +1,233 @@ +#!/usr/bin/env zsh +# Sync engine — type-agnostic main loop. +# +# This module implements the source × targets × types loop. 
It NEVER
+# references concrete type semantics (MCP servers, agents, etc.) — instead
+# it dispatches to per-type strategy functions following a fixed naming
+# convention. Adding a new type does NOT require touching this file.
+#
+# ── Strategy contract ──────────────────────────────────────────────────
+#
+# Every type registered in lib/account/sync/registry.zsh MUST implement
+# these five functions, named `_ckipper_account_sync_<type>_<verb>`:
+#
+#   <type>_enumerate
+#     List every syncable item the source has, one per line, as
+#     "id<TAB>display". `id` is whatever the apply/compare/diff
+#     functions need to look the item up; `display` is the picker label.
+#     Empty stdout means "nothing to sync" (no items in source).
+#
+#   <type>_compare
+#     Print one of: "new" | "overwrite" | "unchanged".
+#
+#   <type>_summary
+#     Print a one-line summary of the change for the preview table
+#     (e.g. "+12/-3 lines", "command changed", "false → true").
+#
+#   <type>_diff
+#     Print a full diff for drill-down view. Files use `diff -u`;
+#     JSON values use side-by-side `jq` pretty-print. May be empty for
+#     new items (drill-down skips status==new in the preview UI).
+#
+#   <type>_apply
+#     Perform the merge. MUST call _ckipper_account_sync_backup_file
+#     before any destructive write. Returns 0 on success; non-zero on
+#     failure (engine then triggers per-target rollback).
+#
+# All five functions take their arguments in the same order so the engine
+# can call them through _ckipper_account_sync_strategy_fn uniformly.
+
+# Compute the strategy function name for a (type, verb) pair.
+#
+# Args: $1 — type id (e.g. "mcp", "claude-md"); $2 — verb (enumerate, compare,
+#   summary, diff, apply).
+# Returns: 0; prints the function name (e.g. "_ckipper_account_sync_mcp_enumerate").
+_ckipper_account_sync_strategy_fn() {
+  local type="$1" verb="$2"
+  echo "_ckipper_account_sync_${type}_${verb}"
+}
+
+# Refuse to sync when any Claude CLI is running.
+#
+# We previously tried to filter by "Claude running on this destination dir,"
+# but that requires reading another process's CLAUDE_CONFIG_DIR env var —
+# macOS does not expose that to non-privileged callers (`ps -E` is a no-op
+# for foreign processes; `pgrep -lx` only shows PID + basename). The only
+# reliable signal we have is "is any claude CLI running at all," so we
+# refuse on that. Coarser than designed, but the original sync.zsh on
+# develop did the same; --force is the documented escape hatch.
+#
+# Args: $1 — destination dir (kept in the signature for forward
+#   compatibility once we have a dst-specific signal); $2 — force flag
+#   ("true" | "false").
+# Returns: 0 if safe to proceed; 1 if any Claude CLI is running (unless force).
+# Errors (stderr): multiline message identifying the running process(es).
+_ckipper_account_sync_assert_dst_idle() {
+  local dst_dir="$1" force="$2"
+  [[ "$force" == "true" ]] && return 0
+  local procs; procs=$(_core_running_claude_processes 2>/dev/null)
+  [[ -z "$procs" ]] && return 0
+  {
+    echo "Refusing to sync: a Claude CLI process is running."
+    echo "$procs" | sed 's/^/  /'
+    echo ""
+    echo "Quit running Claude (or pass --force to override; risk of file races)."
+  } >&2
+  return 1
+}
+
+# Validate a single (source, target) pair. Identity check only; account
+# existence is handled by _core_account_dir from lib/core/registry.zsh
+# at the dispatcher layer.
+#
+# Args: $1 — source name; $2 — target name.
+# Returns: 0 if valid; 1 if names match.
+# Errors (stderr): "Source and target must differ: <name>"
+_ckipper_account_sync_validate_pair() {
+  local src="$1" tgt="$2"
+  if [[ "$src" == "$tgt" ]]; then
+    echo "Source and target must differ: $src" >&2
+    return 1
+  fi
+  return 0
+}
+
+# Build the per-(target, type) change set for a single sync slice.
+# For each type, runs that strategy's enumerate then compare per item;
+# emits one TSV row per item with the resulting status appended.
+#
+# Args: $1 — src dir; $2 — dst dir; $3 — src account name; $4 — dst account name;
+#   $5..$N — type ids to walk (already resolved from --include/--exclude).
+# Returns: 0 always; prints "<type>\t<id>\t<display>\t<status>" per line.
+_ckipper_account_sync_build_change_set() {
+  local src_dir="$1" dst_dir="$2" src_name="$3" dst_name="$4"
+  shift 4
+  local type
+  for type in "$@"; do
+    _ckipper_account_sync_walk_type "$type" "$src_dir" "$dst_dir" "$src_name" "$dst_name"
+  done
+}
+
+# Walk a single type's items. Types in _CKIPPER_SYNC_TYPE_USES_NAMES use
+# account names instead of dirs.
+#
+# Args: $1 — type; $2 — src_dir; $3 — dst_dir; $4 — src_name; $5 — dst_name.
+# Returns: 0 always; prints rows.
+_ckipper_account_sync_walk_type() {
+  local type="$1" src_dir="$2" dst_dir="$3" src_name="$4" dst_name="$5"
+  local enumerate_fn compare_fn
+  enumerate_fn=$(_ckipper_account_sync_strategy_fn "$type" enumerate)
+  compare_fn=$(_ckipper_account_sync_strategy_fn "$type" compare)
+  local arg_a="$src_dir" arg_b="$dst_dir"
+  (( ${+_CKIPPER_SYNC_TYPE_USES_NAMES[$type]} )) && { arg_a="$src_name"; arg_b="$dst_name"; }
+  local id display change_status
+  while IFS=$'\t' read -r id display; do
+    [[ -z "$id" ]] && continue
+    change_status=$("$compare_fn" "$arg_a" "$arg_b" "$id")
+    echo "$type"$'\t'"$id"$'\t'"$display"$'\t'"$change_status"
+  done < <("$enumerate_fn" "$arg_a")
+}
+
+# Apply a change set to a single target. Steps:
+#   1. Create backup dir + manifest, populate _CKIPPER_SYNC_CTX[backup_dir].
+#   2. For each change, call the strategy's apply (which itself calls
+#      _ckipper_account_sync_backup_file before writing).
+#   3. On any failure: roll back via _ckipper_account_sync_rollback_target,
+#      print the partial manifest's path, and return non-zero.
+#
+# Also (re)populates _CKIPPER_SYNC_CTX with the four name/dir args so the function
+# is callable on its own (engine_test.bats invokes it directly without
+# going through run_one_target).
+#
+# Reads the change set on stdin: TSV rows of "<type>\t<id>\t<display>\t<status>"
+# where status is one of "new" | "overwrite" (unchanged rows are filtered upstream).
+#
+# Args: $1 — src dir; $2 — dst dir; $3 — src name; $4 — dst name.
+# Returns: 0 on success; 1 if any apply failed (after rollback completed).
+_ckipper_account_sync_apply_target() {
+  local src_dir="$1" dst_dir="$2" src_name="$3" dst_name="$4"
+  local backup_dir
+  backup_dir=$(_ckipper_account_sync_backup_create "$dst_dir" "$src_name")
+  _ckipper_account_sync_manifest_init "$backup_dir" "$src_name" "$dst_name"
+  _CKIPPER_SYNC_CTX[src_dir]="$src_dir"; _CKIPPER_SYNC_CTX[dst_dir]="$dst_dir"
+  _CKIPPER_SYNC_CTX[src_name]="$src_name"; _CKIPPER_SYNC_CTX[dst_name]="$dst_name"
+  _CKIPPER_SYNC_CTX[backup_dir]="$backup_dir"
+  local rc=0 type id display change_status
+  while IFS=$'\t' read -r type id display change_status; do
+    [[ -z "$type" || "$change_status" == "unchanged" ]] && continue
+    _ckipper_account_sync_apply_one "$type" "$id" || { rc=1; break; }
+  done
+  if (( rc != 0 )); then
+    _ckipper_account_sync_rollback_target "$backup_dir" "$dst_dir" >&2
+    echo "Rolled back. Backup preserved at: $backup_dir" >&2
+  fi
+  return $rc
+}
+
+# Apply one change set entry. Bridges between the strategy contract and
+# the manifest schema. Types in _CKIPPER_SYNC_TYPE_USES_NAMES use names;
+# everything else uses dirs.
+#
+# Reads src_dir/dst_dir/src_name/dst_name/backup_dir from _CKIPPER_SYNC_CTX (set
+# by apply_target). Keeping these in context drops the parameter count
+# from 8 to 3, satisfying the .claude/rules/code-style.md cap.
+#
+# Manifest is appended BEFORE the apply call, not after. If the apply
+# crashes mid-write (backed-up the file, started writing, errored), the
+# manifest still contains the entry so rollback can restore from the
+# backup dir. Without this, mid-write failures leave the destination
+# half-written with no manifest record (rollback would skip the file).
+# +# `op` is derived from whether the live file exists at apply time, NOT +# from change_status. change_status="new" can fire when a sub-key is +# absent from a file that already exists (e.g. adding one MCP server to a +# .claude.json that already has others); recording op=create there would +# make rollback rm-rf the whole file, destroying unrelated data. +# +# Args: $1 — type; $2 — id. +# Returns: 0 on success; non-zero on apply failure. +_ckipper_account_sync_apply_one() { + local type="$1" id="$2" + local apply_fn; apply_fn=$(_ckipper_account_sync_strategy_fn "$type" apply) + local arg_a="${_CKIPPER_SYNC_CTX[src_dir]}" arg_b="${_CKIPPER_SYNC_CTX[dst_dir]}" + (( ${+_CKIPPER_SYNC_TYPE_USES_NAMES[$type]} )) && { arg_a="${_CKIPPER_SYNC_CTX[src_name]}"; arg_b="${_CKIPPER_SYNC_CTX[dst_name]}"; } + local rel; rel=$(_ckipper_account_sync_manifest_rel "$type" "$id") + local live; live=$(_ckipper_account_sync_live_path "$type" "${_CKIPPER_SYNC_CTX[dst_dir]}" "$rel") + local op="overwrite"; [[ ! -e "$live" ]] && op="create" + _ckipper_account_sync_manifest_append "${_CKIPPER_SYNC_CTX[backup_dir]}" "$rel" "$op" "$type" "$id" + "$apply_fn" "$arg_a" "$arg_b" "$id" "${_CKIPPER_SYNC_CTX[backup_dir]}" +} + +# Compute the manifest's path field for a given (type, id). The relpath +# is what _ckipper_account_sync_rollback_one operates on. +# +# Args: $1 — type; $2 — id. +# Returns: 0; prints relpath. +_ckipper_account_sync_manifest_rel() { + local type="$1" id="$2" + case "$type" in + mcp) echo ".claude.json" ;; + settings|statusline) echo "settings.json" ;; + prefs) echo "accounts.json" ;; + *) echo "$id" ;; + esac +} + +# Build a TSV of (type, id, summary) by calling each strategy's _summary +# function for every changeset row. Reads the changeset on stdin; writes +# to stdout. Skips unchanged rows so the picker only sees actionable items. +# +# Args: $1 — src_dir; $2 — dst_dir; $3 — src_name; $4 — dst_name. +# Returns: 0 always. 
+_ckipper_account_sync_build_summaries() {
+  local src_dir="$1" dst_dir="$2" src_name="$3" dst_name="$4"
+  local type id display change_status summary_fn arg_a arg_b summary
+  while IFS=$'\t' read -r type id display change_status; do
+    [[ -z "$type" || "$change_status" == "unchanged" ]] && continue
+    summary_fn=$(_ckipper_account_sync_strategy_fn "$type" summary)
+    arg_a="$src_dir"; arg_b="$dst_dir"
+    (( ${+_CKIPPER_SYNC_TYPE_USES_NAMES[$type]} )) && { arg_a="$src_name"; arg_b="$dst_name"; }
+    summary=$("$summary_fn" "$arg_a" "$arg_b" "$id")
+    echo "$type"$'\t'"$id"$'\t'"$summary"
+  done
+}
diff --git a/lib/account/sync/engine_test.bats b/lib/account/sync/engine_test.bats
new file mode 100644
index 0000000..a660366
--- /dev/null
+++ b/lib/account/sync/engine_test.bats
@@ -0,0 +1,255 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/engine.zsh skeleton.
+
+load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash"
+
+setup() { setup_isolated_env; }
+teardown() { teardown_isolated_env; }
+
+run_in_zsh() {
+  run env CKIPPER_DIR="$CKIPPER_DIR" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; $*"
+}
+
+@test "_ckipper_account_sync_strategy_fn returns the expected naming convention" {
+  run_in_zsh 'echo "$(_ckipper_account_sync_strategy_fn mcp enumerate)"'
+  [ "$status" -eq 0 ]
+  [[ "$output" == "_ckipper_account_sync_mcp_enumerate" ]]
+}
+
+@test "_ckipper_account_sync_strategy_fn handles hyphenated type ids" {
+  run_in_zsh 'echo "$(_ckipper_account_sync_strategy_fn claude-md compare)"'
+  [ "$status" -eq 0 ]
+  [[ "$output" == "_ckipper_account_sync_claude-md_compare" ]]
+}
+
+@test "engine sources without errors" {
+  run_in_zsh 'echo OK'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"OK"* ]]
+}
+
+@test "_ckipper_account_sync_assert_dst_idle returns 0 when no claude running" {
+  # The pgrep stub returns no matches by default in the test env.
+  run env CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" PATH="$PATH" \
+    zsh -c "source \"$REPO_ROOT/lib/core/keychain.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      _ckipper_account_sync_assert_dst_idle '$TMP_HOME/dst' false && echo OK"
+  [[ "$output" == *"OK"* ]]
+}
+
+@test "_ckipper_account_sync_assert_dst_idle returns 1 when claude is running on dst" {
+  # Stage a fake pgrep stub that prints a process referencing dst.
+  local fake_pgrep="$TMP_HOME/bin/pgrep"
+  mkdir -p "$TMP_HOME/bin"
+  cat > "$fake_pgrep" < "$fake_pgrep" < "$src/.claude.json"
+  echo '{"mcpServers":{}}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      _ckipper_account_sync_build_change_set '$src' '$dst' src dst mcp"
+  [[ "$output" == *"mcp"$'\t'"github"$'\t'* ]]
+  [[ "$output" == *"new"* ]]
+}
+
+@test "_ckipper_account_sync_build_summaries dispatches each type's _summary fn" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"new"}}}' > "$src/.claude.json"
+  echo '{"mcpServers":{"github":{"command":"old"}}}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      printf 'mcp\tgithub\tgithub\toverwrite\n' \
+        | _ckipper_account_sync_build_summaries '$src' '$dst' src dst"
+  [[ "$output" == *"mcp"$'\t'"github"$'\t'*"overwrite"* ]]
+  [[ "$output" == *"server config changed"* ]]
+}
+
+@test "_ckipper_account_sync_build_summaries skips unchanged rows" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{}}' > "$src/.claude.json"
+  echo '{"mcpServers":{}}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      printf 'mcp\tunchanged-srv\tunchanged-srv\tunchanged\n' \
+        | _ckipper_account_sync_build_summaries '$src' '$dst' src dst | wc -l | tr -d ' '"
+  [[ "$output" == *"0"* ]]
+}
+
+@test "_ckipper_account_sync_apply_target rolls back via manifest after mid-write failure" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"new"}}}' > "$src/.claude.json"
+  echo '{"mcpServers":{"github":{"command":"old"}},"keep":"this"}' > "$dst/.claude.json"
+  # Stage a fake apply that backs up + then fails — simulates a mid-write crash.
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      # Override mcp_apply to simulate a crashing strategy.
+      _ckipper_account_sync_mcp_apply() {
+        _ckipper_account_sync_backup_file \"\$4\" \"\$2/.claude.json\" \".claude.json\"
+        echo 'corrupt-mid-write' > \"\$2/.claude.json\"
+        return 1
+      }
+      printf 'mcp\tgithub\tgithub\toverwrite\n' \
+        | _ckipper_account_sync_apply_target '$src' '$dst' src dst
+      # Rollback should have restored the original.
+      jq -r '.mcpServers.github.command' '$dst/.claude.json' 2>&1
+      jq -r '.keep' '$dst/.claude.json' 2>&1"
+  [[ "$output" == *"old"* ]]
+  [[ "$output" == *"this"* ]]
+}
+
+@test "_ckipper_account_sync_apply_target writes changes and records manifest" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json"
+  echo '{"mcpServers":{}}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      printf 'mcp\tgithub\tgithub\tnew\n' \
+        | _ckipper_account_sync_apply_target '$src' '$dst' src dst
+      jq '.mcpServers.github.command' '$dst/.claude.json'
+      ls '$dst/.ckipper-sync-backups'/*-from-src/.ckipper-sync-manifest.json"
+  [[ "$output" == *'"x"'* ]]
+  [[ "$output" == *".ckipper-sync-manifest.json"* ]]
+}
+
+# Bug E: change_status="new" (the SUB-KEY is new) used to be conflated with
+# op="create" (the FILE didn't exist). When a user added a new MCP server
+# to a destination that already had .claude.json with unrelated servers,
+# the manifest recorded op=create, and rollback rm-rf'd the entire file —
+# wiping unrelated servers. The fix derives op from file existence at apply
+# time, so this scenario records op=overwrite and rollback restores from
+# backup instead of deleting.
+@test "apply_one records op=overwrite when destination file exists pre-apply (Bug E)" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json"
+  echo '{"mcpServers":{"existing":{"command":"y"}},"keep":"this"}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      # change_status='new' but the FILE exists with unrelated content.
+      printf 'mcp\tgithub\tgithub\tnew\n' \
+        | _ckipper_account_sync_apply_target '$src' '$dst' src dst
+      # The manifest entry must record op=overwrite, NOT create.
+      jq -r '.files[0].operation' \"\$(ls -d '$dst/.ckipper-sync-backups'/*-from-src)/.ckipper-sync-manifest.json\""
+  [[ "$output" == *"overwrite"* ]]
+  [[ "$output" != *"create"* ]]
+}
+
+# Bug E (companion): ensure op=create still fires when the file is genuinely absent.
+@test "apply_one records op=create when destination file is absent pre-apply (Bug E)" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json"
+  # No .claude.json on dst — file genuinely doesn't exist.
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      printf 'mcp\tgithub\tgithub\tnew\n' \
+        | _ckipper_account_sync_apply_target '$src' '$dst' src dst
+      jq -r '.files[0].operation' \"\$(ls -d '$dst/.ckipper-sync-backups'/*-from-src)/.ckipper-sync-manifest.json\""
+  [[ "$output" == *"create"* ]]
+}
+
+# Bug E end-to-end: rollback after a "new sub-key into existing file" sync
+# must preserve the destination file's other content.
+@test "rollback after new-sub-key sync preserves unrelated dst data (Bug E)" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json"
+  echo '{"mcpServers":{"existing":{"command":"y"}},"keep":"this"}' > "$dst/.claude.json"
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/engine.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      printf 'mcp\tgithub\tgithub\tnew\n' \
+        | _ckipper_account_sync_apply_target '$src' '$dst' src dst
+      # Sync succeeded; now run undo via rollback_target.
+      backup_dir=\$(ls -d '$dst/.ckipper-sync-backups'/*-from-src)
+      _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst'
+      # The file MUST still exist with the original content intact.
+      [[ -e '$dst/.claude.json' ]] && echo FILE_PRESENT || echo FILE_DELETED
+      jq -r '.keep' '$dst/.claude.json'
+      jq -r '.mcpServers | keys | sort | join(\",\")' '$dst/.claude.json'"
+  [[ "$output" == *"FILE_PRESENT"* ]]
+  [[ "$output" == *"this"* ]]
+  [[ "$output" == *"existing"* ]]
+  [[ "$output" != *"github"* ]]
+}
diff --git a/lib/account/sync/interactive.zsh b/lib/account/sync/interactive.zsh
new file mode 100644
index 0000000..adb69b5
--- /dev/null
+++ b/lib/account/sync/interactive.zsh
@@ -0,0 +1,95 @@
+#!/usr/bin/env zsh
+# Interactive wizard for `ckipper account sync` — gum-driven pickers for
+# source account, target accounts (multi-select), and types (multi-select).
+#
+# All pickers honor CKIPPER_NO_GUM via lib/core/prompt.zsh helpers
+# (_core_prompt_choose etc.) so non-TTY callers and tests have a fallback.
+
+# List every registered account name (lexically sorted — jq's `keys`
+# sorts alphabetically).
+#
+# Returns: 0; prints account names, one per line.
+_ckipper_account_sync_list_accounts() {
+  [[ ! -f "$CKIPPER_REGISTRY" ]] && return 0
+  jq -r '.accounts | keys[]?' "$CKIPPER_REGISTRY" 2>/dev/null
+}
+
+# List every registered account name EXCEPT the given one.
+#
+# Args: $1 — account name to exclude.
+# Returns: 0; prints filtered list.
+_ckipper_account_sync_list_accounts_except() {
+  local exclude="$1"
+  _ckipper_account_sync_list_accounts | grep -vxF "$exclude" 2>/dev/null
+}
+
+# Prompt the user to pick the source account.
+#
+# Returns: 0 with chosen name on stdout; 1 if user cancels or no accounts
+# are registered.
+_ckipper_account_sync_pick_source() {
+  local -a accounts
+  accounts=( ${(f)"$(_ckipper_account_sync_list_accounts)"} )
+  if (( ${#accounts} == 0 )); then
+    echo "No accounts registered. Run: ckipper account add " >&2
+    return 1
+  fi
+  _core_prompt_choose "Sync FROM which account?" "${accounts[@]}"
+}
+
+# Prompt to multi-select target accounts. With gum, uses --no-limit.
+# Without gum, falls back to a comma-separated input prompt.
+#
+# Args: $1 — source account name (excluded from candidates).
+# Returns: 0 with chosen names (one per line) on stdout; 1 if user cancels.
+_ckipper_account_sync_pick_targets() {
+  local source="$1"
+  local -a candidates
+  candidates=( ${(f)"$(_ckipper_account_sync_list_accounts_except "$source")"} )
+  if (( ${#candidates} == 0 )); then
+    echo "No other accounts to sync to." >&2
+    return 1
+  fi
+  if _core_prompt_use_gum; then
+    printf '%s\n' "${candidates[@]}" | gum choose --no-limit --header "Sync TO which accounts? (space to multi-select)"
+    return $?
+  fi
+  _ckipper_account_sync_pick_targets_fallback "${candidates[@]}"
+}
+
+# Pure-zsh fallback: comma-separated input, validated against candidates.
+#
+# Args: $@ — candidate account names.
+# Returns: 0; prints chosen names.
+_ckipper_account_sync_pick_targets_fallback() {
+  local -a candidates=("$@")
+  echo "Available targets: $*" >&2
+  local input
+  input=$(_core_prompt_input "Enter comma-separated targets" "")
+  local name
+  for name in ${(s:,:)input}; do
+    # Only emit names that match a candidate; warn and skip unknown
+    # tokens so typos can't become bogus sync targets downstream.
+    if (( ${candidates[(Ie)$name]} )); then
+      echo "$name"
+    else
+      echo "Skipping unknown account: $name" >&2
+    fi
+  done
+}
+
+# Prompt to multi-select sync types from the registry.
+#
+# Returns: 0; prints chosen type ids.
+_ckipper_account_sync_pick_types() {
+  local -a labels
+  local t
+  for t in "${(@k)_CKIPPER_SYNC_TYPE_LABEL}"; do
+    labels+=("$t — ${_CKIPPER_SYNC_TYPE_LABEL[$t]}")
+  done
+  if _core_prompt_use_gum; then
+    printf '%s\n' "${labels[@]}" \
+      | gum choose --no-limit --header "Pick types to sync (space to multi-select)" \
+      | awk '{print $1}'
+    return $?
+  fi
+  echo "Type tokens: ${(@k)_CKIPPER_SYNC_TYPE_LABEL}" >&2
+  local input
+  input=$(_core_prompt_input "Enter comma-separated types" "")
+  local name
+  for name in ${(s:,:)input}; do
+    echo "$name"
+  done
+}
diff --git a/lib/account/sync/interactive_test.bats b/lib/account/sync/interactive_test.bats
new file mode 100644
index 0000000..1146059
--- /dev/null
+++ b/lib/account/sync/interactive_test.bats
@@ -0,0 +1,36 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/interactive.zsh — gum pickers.
+
+load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  cat > "$CKIPPER_REGISTRY" <_diff function.
+
+readonly _CKIPPER_SYNC_BADGE_NEW="[+]"
+readonly _CKIPPER_SYNC_BADGE_OVERWRITE="[~]"
+readonly _CKIPPER_SYNC_DIVIDER_WIDTH=45
+readonly _CKIPPER_SYNC_DISPLAY_COL_WIDTH=26
+
+# _CKIPPER_SYNC_CTX is declared in lib/account/sync/_shared.zsh.
+
+# Print the divider line for the summary table.
+#
+# Returns: 0 always.
+_ckipper_account_sync_print_divider() {
+  # Build the rule with a plain repeat loop rather than `tr ' ' '─'`:
+  # tr is byte-oriented, so the multibyte '─' gets mangled on GNU tr.
+  local i line=""
+  for (( i = 0; i < _CKIPPER_SYNC_DIVIDER_WIDTH; i++ )); do line+="─"; done
+  print -r -- "$line"
+}
+
+# Print the per-item line with badge + display + summary.
+#
+# Args: $1 — change status; $2 — display; $3 — summary.
+# Returns: 0; suppresses unchanged rows.
+_ckipper_account_sync_render_row() {
+  local cmp_status="$1" display="$2" summary="$3"
+  local w="$_CKIPPER_SYNC_DISPLAY_COL_WIDTH"
+  case "$cmp_status" in
+    new) printf ' %s %-*s (%s)\n' "$_CKIPPER_SYNC_BADGE_NEW" "$w" "$display" "${summary:-new}" ;;
+    overwrite) printf ' %s %-*s (%s)\n' "$_CKIPPER_SYNC_BADGE_OVERWRITE" "$w" "$display" "${summary:-overwrite}" ;;
+    unchanged) ;;
+  esac
+}
+
+# Render the summary table. Reads a change-set on stdin (TSV rows), groups
+# by type, and prints the preview layout to stdout. The 4th positional arg
+# is a path to a precomputed summaries file (one "type\tid\tsummary" per
+# line) — built by the engine before this is called so we don't re-call
+# every strategy's summary function inside the renderer.
+#
+# Args: $1 — src name; $2 — dst name; $3 — backup_dir path; $4 — summaries file.
+# Returns: 0 always.
+_ckipper_account_sync_render_summary() {
+  local src_name="$1" dst_name="$2" backup_dir="$3" summaries="$4"
+  local -A summary_map
+  if [[ -f "$summaries" ]]; then
+    local s_type s_id s_text
+    while IFS=$'\t' read -r s_type s_id s_text; do
+      summary_map["${s_type}"$'\t'"${s_id}"]="$s_text"
+    done < "$summaries"
+  fi
+  echo ""
+  echo "Sync $src_name → $dst_name"
+  _ckipper_account_sync_print_divider
+  local current_type="" type id display change_status
+  while IFS=$'\t' read -r type id display change_status; do
+    [[ -z "$type" ]] && continue
+    if [[ "$type" != "$current_type" ]]; then
+      echo " ${_CKIPPER_SYNC_TYPE_LABEL[$type]:-$type}"
+      current_type="$type"
+    fi
+    _ckipper_account_sync_render_row "$change_status" "$display" "${summary_map["${type}"$'\t'"${id}"]}"
+  done
+  _ckipper_account_sync_print_divider
+  echo "Backup → $backup_dir"
+}
+
+# Count totals from the change-set on stdin.
+#
+# Returns: 0; prints "<total> <new> <overwrite>" on a single line.
+_ckipper_account_sync_count_changes() {
+  awk -F'\t' '
+    $4 == "new" { n++; total++ }
+    $4 == "overwrite" { o++; total++ }
+    END { printf "%d %d %d\n", total+0, n+0, o+0 }
+  '
+}
+
+# Filter a change-set on stdin to only [~] (overwrite) rows. The drill-
+# down picker only makes sense for overwrites (new items have no
+# destination value to diff against).
+#
+# Returns: 0; prints "<type>\t<id>\t<display>" per line for overwrites.
+_ckipper_account_sync_drill_down_items() {
+  awk -F'\t' '$4 == "overwrite" { print $1 "\t" $2 "\t" $3 }'
+}
+
+# Open the drill-down loop (gum-driven). User picks an item; we print its
+# full diff via the strategy's _diff function. Loops until the user
+# picks "Back" or hits EOF.
+#
+# Reads _CKIPPER_SYNC_CTX[items] for the items-file path; drill_down_show reads
+# the rest of the per-target dirs/names directly.
+#
+# Returns: 0 always.
+_ckipper_account_sync_drill_down_loop() {
+  local items_file="${_CKIPPER_SYNC_CTX[items]}"
+  [[ ! -s "$items_file" ]] && { echo "No overwrites to drill into."; return 0; }
+  # Hoist `local choice` and `local _ack` out of the loop: re-declaring
+  # `local var` (no =value) on a subsequent iteration causes zsh to
+  # print `var='prior_value'`, which would surface as terminal noise.
+  local choice="" _ack=""
+  while true; do
+    choice=$(_ckipper_account_sync_drill_down_pick "$items_file") || return 0
+    [[ "$choice" == "Back" || -z "$choice" ]] && return 0
+    _ckipper_account_sync_drill_down_show "$choice"
+    echo ""
+    echo "(Press enter to return to picker)"
+    read -r _ack
+  done
+}
+
+# Pick one drill-down item. Uses gum if available; otherwise prints a
+# numbered list. The label encodes the type so the show function can
+# look up the id from the items file.
+#
+# Args: $1 — items file (TSV: type\tid\tdisplay).
+# Returns: gum exit; prints chosen label or "Back".
+_ckipper_account_sync_drill_down_pick() {
+  local items_file="$1"
+  local -a labels=("Back")
+  local type id display
+  while IFS=$'\t' read -r type id display; do
+    labels+=("[$type] $display")
+  done < "$items_file"
+  _core_prompt_choose "View diff for which item?" "${labels[@]}"
+}
+
+# Render the diff for one selected item. Looks the row up by (type, display)
+# in the items file to recover the original id (which may differ from
+# display, e.g. files-flat: id=agents/foo.md, display=foo.md).
+#
+# Reads items file path and src/dst dirs/names from _CKIPPER_SYNC_CTX.
+#
+# Args: $1 — picker choice (e.g. "[mcp] github").
+# Returns: 0; prints the strategy's diff output.
+_ckipper_account_sync_drill_down_show() {
+  local choice="$1"
+  local items_file="${_CKIPPER_SYNC_CTX[items]}"
+  local type="${choice#\[}"; type="${type%%]*}"
+  local display="${choice#*] }"
+  local id; id=$(_ckipper_account_sync_drill_down_resolve_id "$items_file" "$type" "$display")
+  local diff_fn; diff_fn=$(_ckipper_account_sync_strategy_fn "$type" diff)
+  local arg_a="${_CKIPPER_SYNC_CTX[src_dir]}" arg_b="${_CKIPPER_SYNC_CTX[dst_dir]}"
+  (( ${+_CKIPPER_SYNC_TYPE_USES_NAMES[$type]} )) && { arg_a="${_CKIPPER_SYNC_CTX[src_name]}"; arg_b="${_CKIPPER_SYNC_CTX[dst_name]}"; }
+  "$diff_fn" "$arg_a" "$arg_b" "$id"
+}
+
+# Recover the original id for a (type, display) pair by looking it up
+# in the items file. Falls back to display when no row matches (defensive
+# default — keeps drill-down working even if the items file is stale).
+#
+# Args: $1 — items file; $2 — type; $3 — display.
+# Returns: 0; prints the id (or display on miss).
+_ckipper_account_sync_drill_down_resolve_id() {
+  local items_file="$1" type="$2" display="$3"
+  local resolved
+  resolved=$(awk -F'\t' -v t="$type" -v d="$display" \
+    '$1 == t && $3 == d { print $2; exit }' "$items_file")
+  echo "${resolved:-$display}"
+}
diff --git a/lib/account/sync/preview_test.bats b/lib/account/sync/preview_test.bats
new file mode 100644
index 0000000..0fc4748
--- /dev/null
+++ b/lib/account/sync/preview_test.bats
@@ -0,0 +1,77 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/preview.zsh — summary table + drill-down.
+
+load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash"
+
+setup() { setup_isolated_env; }
+teardown() { teardown_isolated_env; }
+
+run_in_zsh() {
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_NO_GUM=1 TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/core/style.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/preview.zsh\"; $*"
+}
+
+@test "_ckipper_account_sync_render_summary groups by type with status badges" {
+  run_in_zsh '
+    printf "mcp\tgithub\tgithub\tnew\n" >/tmp/cs.$$
+    printf "mcp\tvibma\tvibma\toverwrite\n" >>/tmp/cs.$$
+    printf "claude-md\tCLAUDE.md\tCLAUDE.md\tnew\n" >>/tmp/cs.$$
+    cat /tmp/cs.$$ \
+      | _ckipper_account_sync_render_summary src dst /tmp/backup-dir-stub /tmp/no-summaries
+    rm -f /tmp/cs.$$'
+  [[ "$output" == *"MCP servers"* ]]
+  [[ "$output" == *"[+]"* ]]
+  [[ "$output" == *"github"* ]]
+  [[ "$output" == *"[~]"* ]]
+  [[ "$output" == *"vibma"* ]]
+  [[ "$output" == *"CLAUDE.md (user memory)"* ]]
+  [[ "$output" == *"Backup →"* ]]
+}
+
+@test "_ckipper_account_sync_render_summary suppresses unchanged rows by default" {
+  run_in_zsh '
+    printf "mcp\tgithub\tgithub\tnew\n" >/tmp/cs.$$
+    printf "mcp\tunchanged-srv\tunchanged-srv\tunchanged\n" >>/tmp/cs.$$
+    cat /tmp/cs.$$ \
+      | _ckipper_account_sync_render_summary src dst /tmp/backup-dir-stub /tmp/no-summaries
+    rm -f /tmp/cs.$$'
+  [[ "$output" != *"unchanged-srv"* ]]
+}
+
+@test "_ckipper_account_sync_count_changes returns total + new + overwrite" {
+  run_in_zsh '
+    printf "mcp\tgithub\tgithub\tnew\n" >/tmp/cs.$$
+    printf "mcp\tvibma\tvibma\toverwrite\n" >>/tmp/cs.$$
+    printf "settings\tmodel\tmodel\tnew\n" >>/tmp/cs.$$
+    cat /tmp/cs.$$ | _ckipper_account_sync_count_changes
+    rm -f /tmp/cs.$$'
+  [[ "$output" == *"3 2 1"* ]]
+}
+
+@test "_ckipper_account_sync_drill_down_items emits only [~] (overwrite) rows" {
+  run_in_zsh '
+    printf "mcp\tgithub\tgithub\tnew\n" >/tmp/cs.$$
+    printf "mcp\tvibma\tvibma\toverwrite\n" >>/tmp/cs.$$
+    printf "settings\tmodel\tmodel\toverwrite\n" >>/tmp/cs.$$
+    cat /tmp/cs.$$ | _ckipper_account_sync_drill_down_items
+    rm -f /tmp/cs.$$'
+  [[ "$output" != *"github"* ]]
+  [[ "$output" == *"vibma"* ]]
+  [[ "$output" == *"model"* ]]
+}
+
+@test "_ckipper_account_sync_drill_down_resolve_id recovers id from (type, display)" {
+  run_in_zsh '
+    printf "agents\tagents/foo.md\tfoo.md\n" >/tmp/items.$$
+    printf "mcp\tgithub\tgithub\n" >>/tmp/items.$$
+    result=$(_ckipper_account_sync_drill_down_resolve_id /tmp/items.$$ agents foo.md)
+    echo "agents:$result"
+    result=$(_ckipper_account_sync_drill_down_resolve_id /tmp/items.$$ mcp github)
+    echo "mcp:$result"
+    rm -f /tmp/items.$$'
+  [[ "$output" == *"agents:agents/foo.md"* ]]
+  [[ "$output" == *"mcp:github"* ]]
+}
diff --git a/lib/account/sync/registry.zsh b/lib/account/sync/registry.zsh
new file mode 100644
index 0000000..6c4c4fe
--- /dev/null
+++ b/lib/account/sync/registry.zsh
@@ -0,0 +1,123 @@
+#!/usr/bin/env zsh
+# Declarative registry of syncable types for `ckipper account sync`.
+#
+# This is the single source of truth for "what can be synced." Adding a new
+# syncable type is: append to all three parallel arrays AND implement the
+# strategy contract (see lib/account/sync/engine.zsh for the contract).
+#
+# Mirrors the parallel-array idiom used by lib/core/schema.zsh.
+
+# Human-readable label, shown in pickers and the summary table.
+typeset -gA _CKIPPER_SYNC_TYPE_LABEL=(
+  [mcp]="MCP servers"
+  [settings]="Claude settings"
+  [claude-md]="CLAUDE.md (user memory)"
+  [agents]="Sub-agents"
+  [commands]="Custom slash commands"
+  [output-styles]="Output styles"
+  [skills]="Skills"
+  [statusline]="Status line"
+  [hooks]="User hooks"
+  [prefs]="Account preferences"
+)
+
+# Implementation kind. Drives which strategy module each type's functions
+# live in. Allowed values: "structured" (JSON-key merges), "files-flat"
+# (flat .md files), "files-dir" (per-directory items), "special" (custom
+# logic — statusline split-detection, hooks install-allowlist filter).
+typeset -gA _CKIPPER_SYNC_TYPE_KIND=(
+  [mcp]=structured [settings]=structured [prefs]=structured
+  [claude-md]=files-flat [agents]=files-flat
+  [commands]=files-flat [output-styles]=files-flat
+  [skills]=files-dir
+  [statusline]=special [hooks]=special
+)
+
+# Types whose strategy functions take account NAMES (not dirs) as their
+# first two positional args. Currently only `prefs`, which operates on the
+# registry (CKIPPER_REGISTRY) keyed by account name. Engine and preview
+# consult this set to pick the right (a, b) pair when invoking strategy
+# functions; without it they would have to hard-code "prefs" semantics.
+typeset -gA _CKIPPER_SYNC_TYPE_USES_NAMES=(
+  [prefs]=1
+)
+
+# Space-separated list of bundles the type belongs to. Bundles are aliases
+# users may pass to --include / --exclude (see _ckipper_account_sync_resolve_*).
+typeset -gA _CKIPPER_SYNC_TYPE_BUNDLES=(
+  [mcp]="all customizations claude-config"
+  [settings]="all customizations claude-config"
+  [claude-md]="all customizations"
+  [agents]="all customizations"
+  [commands]="all customizations"
+  [output-styles]="all customizations"
+  [skills]="all customizations"
+  [statusline]="all customizations"
+  [hooks]="all customizations claude-config"
+  [prefs]="all preferences"
+)
+
+# Known bundle aliases. Asserted disjoint from type ids by registry_test.
+typeset -gra _CKIPPER_SYNC_BUNDLE_ALIASES=(all customizations claude-config preferences)
+
+# Return 0 if $1 is a known sync type id; 1 otherwise.
+#
+# Args: $1 — candidate type id.
+# Returns: 0 if known; 1 otherwise.
+_ckipper_account_sync_is_known_type() {
+  (( ${+_CKIPPER_SYNC_TYPE_LABEL[$1]} ))
+}
+
+# Return 0 if $1 is a known bundle alias; 1 otherwise.
+#
+# Args: $1 — candidate bundle alias.
+# Returns: 0 if known; 1 otherwise.
+_ckipper_account_sync_is_known_bundle() {
+  # `a` rather than `alias` as the loop variable: `alias` shadows the
+  # builtin's name and reads ambiguously.
+  local b="$1" a
+  for a in "${_CKIPPER_SYNC_BUNDLE_ALIASES[@]}"; do
+    [[ "$a" == "$b" ]] && return 0
+  done
+  return 1
+}
+
+# Expand a bundle alias to its constituent type ids (one per line).
+# A non-bundle token is echoed back unchanged (so callers can resolve a
+# mixed list uniformly).
+#
+# Args: $1 — bundle alias OR raw type id.
+# Returns: 0 always; prints expanded list to stdout, one type id per line.
+_ckipper_account_sync_resolve_bundle() {
+  local token="$1" t
+  if ! _ckipper_account_sync_is_known_bundle "$token"; then
+    echo "$token"
+    return 0
+  fi
+  for t in "${(@k)_CKIPPER_SYNC_TYPE_BUNDLES}"; do
+    [[ " ${_CKIPPER_SYNC_TYPE_BUNDLES[$t]} " == *" $token "* ]] && echo "$t"
+  done
+}
+
+# Resolve a comma-separated --include / --exclude pair into a deduplicated
+# sorted list of type ids. `include` may mix bundle aliases and bare type
+# ids; bundles are expanded first, then `exclude` (also mixed) is subtracted.
+#
+# Args: $1 — comma-separated include list; $2 — comma-separated exclude list.
+# Returns: 0 always; prints the final type ids one per line, lexically sorted.
+_ckipper_account_sync_resolve_includes() {
+  local include="$1" exclude="$2"
+  local -A keep
+  local token expanded
+  for token in ${(s:,:)include}; do
+    [[ -z "$token" ]] && continue
+    for expanded in $(_ckipper_account_sync_resolve_bundle "$token"); do
+      keep[$expanded]=1
+    done
+  done
+  for token in ${(s:,:)exclude}; do
+    [[ -z "$token" ]] && continue
+    for expanded in $(_ckipper_account_sync_resolve_bundle "$token"); do
+      unset 'keep['"$expanded"']'
+    done
+  done
+  print -l ${(ko)keep}
+}
diff --git a/lib/account/sync/registry_test.bats b/lib/account/sync/registry_test.bats
new file mode 100644
index 0000000..b449f21
--- /dev/null
+++ b/lib/account/sync/registry_test.bats
@@ -0,0 +1,119 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/registry.zsh — declarative type registry.
+
+load "${BATS_TEST_DIRNAME}/../../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+}
+
+teardown() {
+  teardown_isolated_env
+}
+
+# Helper: source registry.zsh in a zsh subshell and run an expression.
+run_in_zsh() {
+  run env CKIPPER_DIR="$CKIPPER_DIR" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/registry.zsh\"; $*"
+}
+
+@test "registry declares all 10 sync types" {
+  run_in_zsh 'echo ${(k)_CKIPPER_SYNC_TYPE_LABEL} | tr " " "\n" | sort | tr "\n" ","'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"agents,claude-md,commands,hooks,mcp,output-styles,prefs,settings,skills,statusline,"* ]]
+}
+
+@test "every type has a label" {
+  run_in_zsh '
+    for t in mcp settings claude-md agents commands output-styles skills statusline hooks prefs; do
+      [[ -n "${_CKIPPER_SYNC_TYPE_LABEL[$t]}" ]] || { echo "missing label for $t"; exit 1; }
+    done
+    echo OK'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"OK"* ]]
+}
+
+@test "every type has a kind in {structured, files-flat, files-dir, special}" {
+  run_in_zsh '
+    for t in mcp settings claude-md agents commands output-styles skills statusline hooks prefs; do
+      kind="${_CKIPPER_SYNC_TYPE_KIND[$t]}"
+      case "$kind" in
+        structured|files-flat|files-dir|special) ;;
+        *) echo "bad kind $kind for $t"; exit 1 ;;
+      esac
+    done
+    echo OK'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"OK"* ]]
+}
+
+@test "every type has at least one bundle membership" {
+  run_in_zsh '
+    for t in mcp settings claude-md agents commands output-styles skills statusline hooks prefs; do
+      [[ -n "${_CKIPPER_SYNC_TYPE_BUNDLES[$t]}" ]] || { echo "missing bundles for $t"; exit 1; }
+    done
+    echo OK'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"OK"* ]]
+}
+
+@test "_ckipper_account_sync_resolve_bundle expands all to all 10 types" {
+  run_in_zsh '_ckipper_account_sync_resolve_bundle all | sort | tr "\n" ","'
+  [ "$status" -eq 0 ]
+  [[ "$output" == "agents,claude-md,commands,hooks,mcp,output-styles,prefs,settings,skills,statusline," ]]
+}
+
+@test "_ckipper_account_sync_resolve_bundle expands customizations" {
+  run_in_zsh '_ckipper_account_sync_resolve_bundle customizations | sort | tr "\n" ","'
+  [ "$status" -eq 0 ]
+  [[ "$output" == "agents,claude-md,commands,hooks,mcp,output-styles,settings,skills,statusline," ]]
+}
+
+@test "_ckipper_account_sync_resolve_bundle preferences = prefs" {
+  run_in_zsh '_ckipper_account_sync_resolve_bundle preferences | tr "\n" ","'
+  [[ "$output" == "prefs," ]]
+}
+
+@test "_ckipper_account_sync_resolve_bundle claude-config = mcp,settings,hooks" {
+  run_in_zsh '_ckipper_account_sync_resolve_bundle claude-config | sort | tr "\n" ","'
+  [[ "$output" == "hooks,mcp,settings," ]]
+}
+
+@test "_ckipper_account_sync_resolve_bundle returns input unchanged for non-bundle token" {
+  run_in_zsh '_ckipper_account_sync_resolve_bundle mcp | tr "\n" ","'
+  [[ "$output" == "mcp," ]]
+}
+
+@test "_ckipper_account_sync_resolve_includes mixes types and bundles, dedups" {
+  run_in_zsh '_ckipper_account_sync_resolve_includes "preferences,mcp" "" | sort | tr "\n" ","'
+  [[ "$output" == "mcp,prefs," ]]
+}
+
+@test "_ckipper_account_sync_resolve_includes subtracts excludes" {
+  run_in_zsh '_ckipper_account_sync_resolve_includes "all" "prefs,hooks" | sort | tr "\n" ","'
+  [[ "$output" == "agents,claude-md,commands,mcp,output-styles,settings,skills,statusline," ]]
+}
+
+@test "_ckipper_account_sync_is_known_type returns 0 for known type" {
+  run_in_zsh '_ckipper_account_sync_is_known_type mcp && echo ok'
+  [[ "$output" == "ok" ]]
+}
+
+@test "_ckipper_account_sync_is_known_type returns 1 for unknown" {
+  run_in_zsh '_ckipper_account_sync_is_known_type bogus && echo wrongly_ok || true'
+  [ "$status" -eq 0 ]
+  [[ "$output" != *"wrongly_ok"* ]]
+}
+
+@test "bundle names never collide with type ids" {
+  run_in_zsh '
    for b in all customizations claude-config preferences; do
+      if (( ${+_CKIPPER_SYNC_TYPE_LABEL[$b]} )); then
+        echo "bundle $b collides with type id"
+        exit 1
+      fi
+    done
+    echo OK'
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"OK"* ]]
+}
diff --git a/lib/account/sync/strategies/files_dir.zsh b/lib/account/sync/strategies/files_dir.zsh
new file mode 100644
index 0000000..5f1c36f
--- /dev/null
+++ b/lib/account/sync/strategies/files_dir.zsh
@@ -0,0 +1,107 @@
+#!/usr/bin/env zsh
+# Strategy module for "files-dir" sync types — per-directory items:
+#   - skills → /skills//
+#
+# Each item is a top-level entry under the type's subdir (regular dir OR
+# symlink). cp -a preserves symlink semantics, so a destination's symlink
+# resolves to the same target as the source's.
+#
+# Comparison is a recursive content hash (concatenation of per-file hashes
+# in lexical order, then hashed). Symlinks are compared by their target
+# path, NOT by the contents of the target (so two symlinks pointing at
+# the same dir compare equal even if the shared target diverges later).
+
+# Per-type relative subdir under the account dir.
+typeset -gA _CKIPPER_SYNC_FILES_DIR_PATH=(
+  [skills]="skills"
+)
+
+# Enumerate top-level items under the type's subdir. Picks up dirs AND
+# symlinks. We use a manual loop so broken symlinks also enumerate, with
+# apply later catching the failure.
+#
+# Args: $1 — type id; $2 — source account dir.
+# Returns: 0; prints "<id>\t<display>" per item.
+_ckipper_account_sync_files_dir_enumerate() {
+  local type="$1" src="$2"
+  local sub="${_CKIPPER_SYNC_FILES_DIR_PATH[$type]}"
+  local root="$src/$sub"
+  [[ ! -d "$root" ]] && return 0
+  local entry
+  for entry in "$root"/*(NDoN); do
+    [[ -d "$entry" || -L "$entry" ]] || continue
+    echo "$sub/${entry:t}\t${entry:t}"
+  done
+}
+
+# Compare item by directory hash.
+#
+# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath.
+# Returns: 0; prints "new" | "overwrite" | "unchanged".
+_ckipper_account_sync_files_dir_compare() {
+  local type="$1" src="$2" dst="$3" rel="$4"
+  [[ ! -e "$dst/$rel" ]] && { echo "new"; return 0; }
+  local sh dh
+  sh=$(_ckipper_account_sync_hash_dir "$src/$rel")
+  dh=$(_ckipper_account_sync_hash_dir "$dst/$rel")
+  [[ "$sh" == "$dh" ]] && { echo "unchanged"; return 0; }
+  echo "overwrite"
+}
+
+# Summary: file count delta if both sides exist, else status word.
+#
+# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath.
+# Returns: 0; prints summary.
+_ckipper_account_sync_files_dir_summary() {
+  local type="$1" src="$2" dst="$3" rel="$4"
+  local cmp_status; cmp_status=$(_ckipper_account_sync_files_dir_compare "$type" "$src" "$dst" "$rel")
+  case "$cmp_status" in
+    new) echo "new directory" ;;
+    overwrite)
+      local sn dn
+      sn=$(find "$src/$rel" -type f 2>/dev/null | wc -l | tr -d ' ')
+      dn=$(find "$dst/$rel" -type f 2>/dev/null | wc -l | tr -d ' ')
+      echo "overwrite — $dn → $sn files"
+      ;;
+    unchanged) echo "unchanged" ;;
+  esac
+}
+
+# Diff: recursive unified diff between the two trees (diff -ruN). For
+# symlinks, print the target paths.
+#
+# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath.
+# Returns: 0 always.
+_ckipper_account_sync_files_dir_diff() {
+  local type="$1" src="$2" dst="$3" rel="$4"
+  if [[ -L "$src/$rel" || -L "$dst/$rel" ]]; then
+    echo "── source symlink ──"
+    [[ -L "$src/$rel" ]] && readlink "$src/$rel"
+    echo "── destination symlink ──"
+    [[ -L "$dst/$rel" ]] && readlink "$dst/$rel"
+    return 0
+  fi
+  diff -ruN "$dst/$rel" "$src/$rel" 2>/dev/null
+  return 0
+}
+
+# Apply: backup the destination dir (if present), remove it, then cp -a.
+# `rm -rf` is safe because the prior copy is in the backup dir; rollback
+# restores it.
+#
+# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath; $5 — backup_dir.
+# Returns: 0 on success; 1 on cp failure.
+_ckipper_account_sync_files_dir_apply() { + local type="$1" src="$2" dst="$3" rel="$4" backup_dir="$5" + _ckipper_account_sync_backup_file "$backup_dir" "$dst/$rel" "$rel" || return 1 + rm -rf "$dst/$rel" + mkdir -p "$dst/${rel:h}" + cp -a "$src/$rel" "$dst/$rel" +} + +# Per-type wrappers for the strategy contract. +_ckipper_account_sync_skills_enumerate() { _ckipper_account_sync_files_dir_enumerate skills "$@"; } +_ckipper_account_sync_skills_compare() { _ckipper_account_sync_files_dir_compare skills "$@"; } +_ckipper_account_sync_skills_summary() { _ckipper_account_sync_files_dir_summary skills "$@"; } +_ckipper_account_sync_skills_diff() { _ckipper_account_sync_files_dir_diff skills "$@"; } +_ckipper_account_sync_skills_apply() { _ckipper_account_sync_files_dir_apply skills "$@"; } diff --git a/lib/account/sync/strategies/files_dir_test.bats b/lib/account/sync/strategies/files_dir_test.bats new file mode 100644 index 0000000..eb79d09 --- /dev/null +++ b/lib/account/sync/strategies/files_dir_test.bats @@ -0,0 +1,74 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/sync/strategies/files_dir.zsh. 
+ +load "${BATS_TEST_DIRNAME}/../../../../tests/lib/test-helper.bash" + +setup() { setup_isolated_env; } +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/strategies/files_dir.zsh\"; $*" +} + +@test "skills_enumerate lists each subdir under skills/" { + local src="$TMP_HOME/src" + mkdir -p "$src/skills/foo" "$src/skills/bar" + run_in_zsh "_ckipper_account_sync_skills_enumerate '$src' | cut -f2 | sort | tr '\n' ','" + [[ "$output" == *"bar,foo,"* ]] +} + +@test "skills_enumerate includes symlinks (treated as items)" { + local src="$TMP_HOME/src" + mkdir -p "$src/skills" "$TMP_HOME/shared/some-skill" + ln -s "$TMP_HOME/shared/some-skill" "$src/skills/some-skill" + run_in_zsh "_ckipper_account_sync_skills_enumerate '$src' | cut -f2" + [[ "$output" == *"some-skill"* ]] +} + +@test "skills_enumerate emits empty when no skills/ dir" { + local src="$TMP_HOME/src" + mkdir -p "$src" + run_in_zsh "_ckipper_account_sync_skills_enumerate '$src' | wc -l | tr -d ' '" + [[ "$output" == *"0"* ]] +} + +@test "skills_compare: new when dst lacks the dir" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/skills/foo" "$dst/skills" + run_in_zsh "_ckipper_account_sync_skills_compare '$src' '$dst' skills/foo" + [[ "$output" == *"new"* ]] +} + +@test "skills_apply preserves symlinks via cp -a" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/skills" "$dst/skills" + mkdir -p "$TMP_HOME/shared/sk1" + echo "skill content" > "$TMP_HOME/shared/sk1/SKILL.md" + ln -s "$TMP_HOME/shared/sk1" "$src/skills/sk1" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_skills_apply '$src' '$dst' skills/sk1 \"\$backup_dir\" + [[ 
-L '$dst/skills/sk1' ]] && echo IS_SYMLINK || echo NOT_SYMLINK + readlink '$dst/skills/sk1'" + [[ "$output" == *"IS_SYMLINK"* ]] + [[ "$output" == *"$TMP_HOME/shared/sk1"* ]] +} + +@test "skills_apply copies regular directory recursively" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/skills/foo" "$dst/skills" + echo "x" > "$src/skills/foo/SKILL.md" + echo "y" > "$src/skills/foo/extra.md" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_skills_apply '$src' '$dst' skills/foo \"\$backup_dir\" + cat '$dst/skills/foo/SKILL.md' + cat '$dst/skills/foo/extra.md'" + [[ "$output" == *"x"* ]] + [[ "$output" == *"y"* ]] +} diff --git a/lib/account/sync/strategies/files_flat.zsh b/lib/account/sync/strategies/files_flat.zsh new file mode 100644 index 0000000..0db33cd --- /dev/null +++ b/lib/account/sync/strategies/files_flat.zsh @@ -0,0 +1,121 @@ +#!/usr/bin/env zsh +# Strategy module for "files-flat" sync types — flat .md file collections +# under the account dir: +# - claude-md → <account-dir>/CLAUDE.md (single file, not a directory) +# - agents → <account-dir>/agents/*.md +# - commands → <account-dir>/commands/*.md +# - output-styles → <account-dir>/output_styles/*.md +# +# All four types share the contract implementations; the only thing that +# varies is the subpath, declared in _CKIPPER_SYNC_FILES_FLAT_PATH. +# +# Items are identified by their relative path from the account dir +# (e.g. "agents/foo.md"). claude-md's single item id is "CLAUDE.md". + +# Per-type relative path under the account dir. +typeset -gA _CKIPPER_SYNC_FILES_FLAT_PATH=( + [claude-md]="CLAUDE.md" + [agents]="agents" + [commands]="commands" + [output-styles]="output_styles" +) + +# Generic enumerator. Lists items as "<relpath>\t<name>". +# - For claude-md: a single line iff CLAUDE.md exists. +# - For others: every *.md file under the subdir. +# +# Args: $1 — type id; $2 — source account dir. +# Returns: 0; prints items one per line.
+_ckipper_account_sync_files_flat_enumerate() { + local type="$1" src="$2" + local sub="${_CKIPPER_SYNC_FILES_FLAT_PATH[$type]}" + local target="$src/$sub" + if [[ "$type" == "claude-md" ]]; then + [[ ! -f "$target" ]] && return 0 + echo "$sub\t$sub" + return 0 + fi + [[ ! -d "$target" ]] && return 0 + local f + for f in "$target"/*.md(N); do + echo "${f#$src/}\t${f:t}" + done +} + +# Generic compare via content hash. +# +# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath (the item id). +# Returns: 0; prints "new" | "overwrite" | "unchanged". +_ckipper_account_sync_files_flat_compare() { + local type="$1" src="$2" dst="$3" rel="$4" + [[ ! -f "$dst/$rel" ]] && { echo "new"; return 0; } + local sh dh + sh=$(_ckipper_account_sync_hash_file "$src/$rel") + dh=$(_ckipper_account_sync_hash_file "$dst/$rel") + [[ "$sh" == "$dh" ]] && { echo "unchanged"; return 0; } + echo "overwrite" +} + +# Generic summary: line-count diff via diff --stat-equivalent. +# +# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath. +# Returns: 0; prints "new" | "overwrite — +A/-D lines" | "unchanged". +_ckipper_account_sync_files_flat_summary() { + local type="$1" src="$2" dst="$3" rel="$4" + local cmp_status; cmp_status=$(_ckipper_account_sync_files_flat_compare "$type" "$src" "$dst" "$rel") + [[ "$cmp_status" != "overwrite" ]] && { echo "$cmp_status"; return 0; } + local stats; stats=$(_ckipper_account_sync_diff_line_stats "$dst/$rel" "$src/$rel") + echo "overwrite — $stats lines" +} + +# Generic diff: unified diff against destination. +# +# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath. +# Returns: 0 always (diff exit code 1 means "files differ", which is expected). +_ckipper_account_sync_files_flat_diff() { + local type="$1" src="$2" dst="$3" rel="$4" + diff -u "$dst/$rel" "$src/$rel" 2>/dev/null + return 0 +} + +# Generic apply: backup destination if present, then cp. +# +# Args: $1 — type id; $2 — src; $3 — dst; $4 — relpath; $5 — backup_dir. 
+# Returns: 0 on success; 1 on cp failure. +_ckipper_account_sync_files_flat_apply() { + local type="$1" src="$2" dst="$3" rel="$4" backup_dir="$5" + _ckipper_account_sync_backup_file "$backup_dir" "$dst/$rel" "$rel" || return 1 + mkdir -p "$dst/${rel:h}" + cp -a "$src/$rel" "$dst/$rel" +} + +# Per-type contract bindings — one-line wrappers over the generic helpers +# so each type satisfies the strategy naming convention. + +# claude-md wrappers +_ckipper_account_sync_claude-md_enumerate() { _ckipper_account_sync_files_flat_enumerate claude-md "$@"; } +_ckipper_account_sync_claude-md_compare() { _ckipper_account_sync_files_flat_compare claude-md "$@"; } +_ckipper_account_sync_claude-md_summary() { _ckipper_account_sync_files_flat_summary claude-md "$@"; } +_ckipper_account_sync_claude-md_diff() { _ckipper_account_sync_files_flat_diff claude-md "$@"; } +_ckipper_account_sync_claude-md_apply() { _ckipper_account_sync_files_flat_apply claude-md "$@"; } + +# agents wrappers +_ckipper_account_sync_agents_enumerate() { _ckipper_account_sync_files_flat_enumerate agents "$@"; } +_ckipper_account_sync_agents_compare() { _ckipper_account_sync_files_flat_compare agents "$@"; } +_ckipper_account_sync_agents_summary() { _ckipper_account_sync_files_flat_summary agents "$@"; } +_ckipper_account_sync_agents_diff() { _ckipper_account_sync_files_flat_diff agents "$@"; } +_ckipper_account_sync_agents_apply() { _ckipper_account_sync_files_flat_apply agents "$@"; } + +# commands wrappers +_ckipper_account_sync_commands_enumerate() { _ckipper_account_sync_files_flat_enumerate commands "$@"; } +_ckipper_account_sync_commands_compare() { _ckipper_account_sync_files_flat_compare commands "$@"; } +_ckipper_account_sync_commands_summary() { _ckipper_account_sync_files_flat_summary commands "$@"; } +_ckipper_account_sync_commands_diff() { _ckipper_account_sync_files_flat_diff commands "$@"; } +_ckipper_account_sync_commands_apply() { _ckipper_account_sync_files_flat_apply commands "$@"; } + 
+# output-styles wrappers +_ckipper_account_sync_output-styles_enumerate() { _ckipper_account_sync_files_flat_enumerate output-styles "$@"; } +_ckipper_account_sync_output-styles_compare() { _ckipper_account_sync_files_flat_compare output-styles "$@"; } +_ckipper_account_sync_output-styles_summary() { _ckipper_account_sync_files_flat_summary output-styles "$@"; } +_ckipper_account_sync_output-styles_diff() { _ckipper_account_sync_files_flat_diff output-styles "$@"; } +_ckipper_account_sync_output-styles_apply() { _ckipper_account_sync_files_flat_apply output-styles "$@"; } diff --git a/lib/account/sync/strategies/files_flat_test.bats b/lib/account/sync/strategies/files_flat_test.bats new file mode 100644 index 0000000..9cbf031 --- /dev/null +++ b/lib/account/sync/strategies/files_flat_test.bats @@ -0,0 +1,98 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/sync/strategies/files_flat.zsh. + +load "${BATS_TEST_DIRNAME}/../../../../tests/lib/test-helper.bash" + +setup() { setup_isolated_env; } +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/strategies/files_flat.zsh\"; $*" +} + +@test "agents_enumerate lists .md files in agents/" { + local src="$TMP_HOME/src" + mkdir -p "$src/agents" + echo "a" > "$src/agents/foo.md" + echo "b" > "$src/agents/bar.md" + run_in_zsh "_ckipper_account_sync_agents_enumerate '$src' | cut -f1 | sort | tr '\n' ','" + [[ "$output" == *"agents/bar.md,agents/foo.md,"* ]] +} + +@test "commands_enumerate lists .md files in commands/" { + local src="$TMP_HOME/src" + mkdir -p "$src/commands" + echo "x" > "$src/commands/deploy.md" + run_in_zsh "_ckipper_account_sync_commands_enumerate '$src' | cut -f1" + [[ "$output" == *"commands/deploy.md"* ]] +} + +@test "claude-md_enumerate emits a single CLAUDE.md entry 
when present" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo "user memory" > "$src/CLAUDE.md" + run_in_zsh "_ckipper_account_sync_claude-md_enumerate '$src' | cut -f1" + [[ "$output" == *"CLAUDE.md"* ]] +} + +@test "claude-md_enumerate is empty when CLAUDE.md absent" { + local src="$TMP_HOME/src" + mkdir -p "$src" + run_in_zsh "_ckipper_account_sync_claude-md_enumerate '$src' | wc -l | tr -d ' '" + [[ "$output" == *"0"* ]] +} + +@test "files_flat_compare: new when destination lacks the file" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/agents" "$dst/agents" + echo "x" > "$src/agents/foo.md" + run_in_zsh "_ckipper_account_sync_agents_compare '$src' '$dst' agents/foo.md" + [[ "$output" == *"new"* ]] +} + +@test "files_flat_compare: unchanged when contents match" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/agents" "$dst/agents" + echo "x" > "$src/agents/foo.md" + echo "x" > "$dst/agents/foo.md" + run_in_zsh "_ckipper_account_sync_agents_compare '$src' '$dst' agents/foo.md" + [[ "$output" == *"unchanged"* ]] +} + +@test "files_flat_compare: overwrite when contents differ" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/agents" "$dst/agents" + echo "new" > "$src/agents/foo.md" + echo "old" > "$dst/agents/foo.md" + run_in_zsh "_ckipper_account_sync_agents_compare '$src' '$dst' agents/foo.md" + [[ "$output" == *"overwrite"* ]] +} + +@test "files_flat_summary returns +N/-N line stats for overwrite" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + printf 'a\nb\nc\n' > "$src/CLAUDE.md" + printf 'a\nx\n' > "$dst/CLAUDE.md" + run_in_zsh "_ckipper_account_sync_claude-md_summary '$src' '$dst' CLAUDE.md" + [[ "$output" == *"+"* ]] + [[ "$output" == *"-"* ]] + [[ "$output" == *"lines"* ]] +} + +@test "files_flat_apply copies file with backup of prior content" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/commands" "$dst/commands" + echo "new content" > 
"$src/commands/deploy.md" + echo "old content" > "$dst/commands/deploy.md" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_commands_apply '$src' '$dst' commands/deploy.md \"\$backup_dir\" + cat '$dst/commands/deploy.md' + cat \"\$backup_dir/commands/deploy.md\"" + [[ "$output" == *"new content"* ]] + [[ "$output" == *"old content"* ]] +} diff --git a/lib/account/sync/strategies/hooks.zsh b/lib/account/sync/strategies/hooks.zsh new file mode 100644 index 0000000..394741a --- /dev/null +++ b/lib/account/sync/strategies/hooks.zsh @@ -0,0 +1,176 @@ +#!/usr/bin/env zsh +# Strategy module for "hooks" sync type — USER-WRITTEN HOOKS ONLY. +# +# A hook script <account-dir>/hooks/<name> is a sync candidate iff the filename +# does NOT match a ckipper-managed install hook in $CKIPPER_DIR/hooks/. +# This filter is computed at runtime so adding a new ckipper safety hook +# automatically excludes it from sync. +# +# Each enumerated item is (script-file, paired-settings-entry). The apply +# function does both: +# 1. Copy the script to the destination (with backup). +# 2. Rewrite the .hooks block in <dst>/settings.json to include the +# paired entry from <src>/settings.json, with command paths rewritten +# to point at the destination's hooks dir. + +# Build the install-managed allowlist as a newline-separated set of basenames. +# +# Returns: 0; prints one filename per line. Empty if install hooks dir +# doesn't exist. +_ckipper_account_sync_hooks_install_allowlist() { + local install_dir="$CKIPPER_DIR/hooks" + [[ ! -d "$install_dir" ]] && return 0 + local f + for f in "$install_dir"/*(N); do + [[ -f "$f" ]] || continue + echo "${f:t}" + done +} + +# Enumerate user-written hooks in <src>/hooks/ — files NOT in the install +# allowlist. +# +# Args: $1 — source account dir. +# Returns: 0; prints "<relpath>\t<name>" per item.
+_ckipper_account_sync_hooks_enumerate() { + local src="$1" + local hooks_dir="$src/hooks" + [[ ! -d "$hooks_dir" ]] && return 0 + local allowlist; allowlist=$(_ckipper_account_sync_hooks_install_allowlist) + local f base + for f in "$hooks_dir"/*(N); do + [[ -f "$f" ]] || continue + base="${f:t}" + if [[ -n "$allowlist" ]] && echo "$allowlist" | grep -qx "$base"; then + continue + fi + echo "hooks/$base\t$base" + done +} + +# Compare: file content hash for the script only — the paired settings +# entry is not compared separately. If the script differs, the whole pair +# is treated as overwrite even if the .hooks entry is identical. +# +# Args: $1 — src; $2 — dst; $3 — relpath (hooks/<name>). +# Returns: 0; prints "new" | "overwrite" | "unchanged". +_ckipper_account_sync_hooks_compare() { + local src="$1" dst="$2" rel="$3" + [[ ! -f "$dst/$rel" ]] && { echo "new"; return 0; } + local sh dh + sh=$(_ckipper_account_sync_hash_file "$src/$rel" 2>/dev/null) + dh=$(_ckipper_account_sync_hash_file "$dst/$rel" 2>/dev/null) + [[ "$sh" == "$dh" ]] && { echo "unchanged"; return 0; } + echo "overwrite" +} + +# Summary: line-count delta + " (paired settings entry)" annotation. +# +# Args: $1 — src; $2 — dst; $3 — relpath. +# Returns: 0; prints summary. +_ckipper_account_sync_hooks_summary() { + local src="$1" dst="$2" rel="$3" + local cmp_status; cmp_status=$(_ckipper_account_sync_hooks_compare "$src" "$dst" "$rel") + case "$cmp_status" in + new) echo "new — paired with settings.hooks entry" ;; + overwrite) + local stats; stats=$(_ckipper_account_sync_diff_line_stats "$dst/$rel" "$src/$rel") + echo "overwrite — $stats lines (+ settings entry)" + ;; + unchanged) echo "unchanged" ;; + esac +} + +# Diff: file diff plus a note about the settings entry.
+_ckipper_account_sync_hooks_diff() { + local src="$1" dst="$2" rel="$3" + diff -u "$dst/$rel" "$src/$rel" 2>/dev/null + echo "── paired settings.json entry: rewritten for destination dir on apply ──" + return 0 +} + +# Apply: copy script + write paired settings.hooks entry with rewritten paths. +# +# Records the settings.json mutation in the manifest in addition to the +# script file. The engine's apply_one only records the script entry (its +# manifest_rel returns the script relpath for hooks), but hooks_apply ALSO +# mutates settings.json — without an explicit manifest entry, rollback +# wouldn't restore settings.json and would leave dangling .hooks entries +# pointing at scripts that have just been deleted/restored. +# +# Multiple hooks in one sync each append a settings.json entry; rollback +# is idempotent so duplicate entries are harmless (each restore from the +# same backup yields the same pre-sync state). +# +# Args: $1 — src; $2 — dst; $3 — relpath; $4 — backup_dir. +# Returns: 0 on success; non-zero on cp/jq/write failure. +_ckipper_account_sync_hooks_apply() { + local src="$1" dst="$2" rel="$3" backup_dir="$4" + _ckipper_account_sync_backup_file "$backup_dir" "$dst/$rel" "$rel" || return 1 + local settings_op="overwrite"; [[ -e "$dst/settings.json" ]] || settings_op="create" + _ckipper_account_sync_backup_file "$backup_dir" "$dst/settings.json" "settings.json" || return 1 + _ckipper_account_sync_manifest_append "$backup_dir" "settings.json" "$settings_op" hooks "$rel" + mkdir -p "$dst/${rel:h}" + cp -a "$src/$rel" "$dst/$rel" || return 1 + chmod +x "$dst/$rel" 2>/dev/null + _ckipper_account_sync_hooks_merge_settings "$src" "$dst" "$rel" +} + +# Merge a single user-hook's paired settings.json entries from src into dst, +# rewriting absolute paths from src dir → dst dir. Operates by: +# 1. Filtering src settings.hooks entries to those whose .command +# references "/<script-basename>" (the user hook being synced). +# 2. Rewriting each .command path's src prefix → dst prefix. +# 3.
Appending the filtered+rewritten entries to dst's per-event arrays +# (creating events that didn't exist on dst). +# +# Args: $1 — src; $2 — dst; $3 — script relpath (hooks/). +# Returns: 0 on success; non-zero on jq/write failure. +_ckipper_account_sync_hooks_merge_settings() { + local src="$1" dst="$2" rel="$3" + local src_settings="$src/settings.json" + local dst_settings="$dst/settings.json" + [[ ! -f "$src_settings" ]] && return 0 + [[ ! -f "$dst_settings" ]] && echo '{}' > "$dst_settings" + local script_basename="${rel:t}" + local filtered_src_hooks + filtered_src_hooks=$(_ckipper_account_sync_hooks_filter_src "$src_settings" "$script_basename" "$src" "$dst") + local merged + merged=$(jq --argjson src_hooks "$filtered_src_hooks" ' + .hooks //= {} | + reduce ($src_hooks | to_entries[]) as $event ( + .; + .hooks[$event.key] //= [] | + .hooks[$event.key] += $event.value + ) + ' "$dst_settings") + _ckipper_account_sync_json_atomic_write "$dst_settings" "$merged" +} + +# Helper: filter src settings.hooks to only those entries that reference +# the given script basename, with the src→dst path rewrite applied. +# +# Args: $1 — src settings.json path; $2 — script basename; +# $3 — src dir; $4 — dst dir. +# Returns: 0; prints filtered hooks JSON object (may be empty {}). +_ckipper_account_sync_hooks_filter_src() { + local src_settings="$1" script_basename="$2" src="$3" dst="$4" + # Literal split+join (NOT sub/gsub) — paths often contain regex + # metacharacters (`.`, `-`) and gsub would treat the src path as a regex, + # silently rewriting unrelated commands that happen to match the pattern. 
+ jq --arg sb "$script_basename" --arg src "$src" --arg dst "$dst" ' + (.hooks // {}) + | to_entries + | map({ + key: .key, + value: ( + .value + | map(.hooks |= map(select(.command | tostring | contains("/" + $sb)))) + | map(select(.hooks | length > 0)) + | map(.hooks |= map(.command |= (split($src) | join($dst)))) + ) + }) + | map(select(.value | length > 0)) + | from_entries + ' "$src_settings" +} diff --git a/lib/account/sync/strategies/hooks_test.bats b/lib/account/sync/strategies/hooks_test.bats new file mode 100644 index 0000000..ebf9720 --- /dev/null +++ b/lib/account/sync/strategies/hooks_test.bats @@ -0,0 +1,146 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/sync/strategies/hooks.zsh. + +load "${BATS_TEST_DIRNAME}/../../../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + # Simulate ckipper install hook set so the allowlist filter has data. + mkdir -p "$CKIPPER_DIR/hooks" + touch "$CKIPPER_DIR/hooks/bash-guardrails.sh" + touch "$CKIPPER_DIR/hooks/protect-claude-config.sh" + touch "$CKIPPER_DIR/hooks/docker-context.sh" + touch "$CKIPPER_DIR/hooks/notify-bell.sh" +} +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/strategies/hooks.zsh\"; $*" +} + +@test "hooks_enumerate skips ckipper safety hooks (filename allowlist)" { + local src="$TMP_HOME/src" + mkdir -p "$src/hooks" + # Two safety hooks (mirroring install dir) — should be filtered. + echo "x" > "$src/hooks/bash-guardrails.sh" + echo "x" > "$src/hooks/notify-bell.sh" + # One user hook — should appear. 
+ echo "x" > "$src/hooks/lint-on-save.sh" + run_in_zsh "_ckipper_account_sync_hooks_enumerate '$src' | cut -f1" + [[ "$output" == *"hooks/lint-on-save.sh"* ]] + [[ "$output" != *"bash-guardrails.sh"* ]] + [[ "$output" != *"notify-bell.sh"* ]] +} + +@test "hooks_enumerate emits empty when no user hooks present" { + local src="$TMP_HOME/src" + mkdir -p "$src/hooks" + echo "x" > "$src/hooks/bash-guardrails.sh" + run_in_zsh "_ckipper_account_sync_hooks_enumerate '$src' | wc -l | tr -d ' '" + [[ "$output" == *"0"* ]] +} + +@test "hooks_compare: new when destination lacks the script" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/hooks" "$dst/hooks" + echo "x" > "$src/hooks/lint.sh" + run_in_zsh "_ckipper_account_sync_hooks_compare '$src' '$dst' hooks/lint.sh" + [[ "$output" == *"new"* ]] +} + +@test "hooks_filter_src: literal substring rewrite (not regex) when src path contains a dot" { + # src path has `.` (regex metachar); a separate command that happens to + # match the regex but NOT the literal substring must be left alone. 
+ local src="$TMP_HOME/.claude-personal" dst="$TMP_HOME/.claude-work" + mkdir -p "$src/hooks" "$dst/hooks" + touch "$src/hooks/lint.sh" + # Settings hook command references TWO paths: + # A) the literal src path — should be rewritten to dst + # B) a different path that matches the src regex (`.` matches `-`) + # — must NOT be rewritten + cat > "$src/settings.json" <<EOF +{"hooks":{"PostToolUse":[{"hooks":[{"command":"$src/hooks/lint.sh"}]},{"hooks":[{"command":"$TMP_HOME/Xclaude-personal/hooks/lint.sh"}]}]}} +EOF + run_in_zsh "_ckipper_account_sync_hooks_filter_src '$src/settings.json' lint.sh '$src' '$dst'" + [[ "$output" == *"$dst/hooks/lint.sh"* ]] + [[ "$output" == *"Xclaude-personal/hooks/lint.sh"* ]] +} + +@test "hooks_apply copies the script and rewrites its settings entry for the destination" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/hooks" "$dst/hooks" + echo "#!/bin/bash" > "$src/hooks/lint.sh" + cat > "$src/settings.json" <<EOF +{"hooks":{"PostToolUse":[{"hooks":[{"command":"$src/hooks/lint.sh"}]}]}} +EOF + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_hooks_apply '$src' '$dst' hooks/lint.sh \"\$backup_dir\" + cat '$dst/hooks/lint.sh' + jq -r '.hooks.PostToolUse[0].hooks[0].command' '$dst/settings.json'" + [[ "$output" == *"#!/bin/bash"* ]] + [[ "$output" == *"$dst/hooks/lint.sh"* ]] + [[ "$output" != *"$src/hooks/lint.sh"* ]] +} + +# Bug F: hooks_apply mutates settings.json (adding the paired .hooks entry) +# but the engine's apply_one only recorded a manifest entry for the script +# file (manifest_rel returns the script relpath for hooks). +# The settings.json mutation was untracked, so a rollback after a hook sync +# left the destination with a phantom .hooks entry pointing at a deleted +# script. Fix: hooks_apply explicitly appends a settings.json manifest +# entry alongside the script entry.
+@test "hooks_apply records settings.json mutation in the manifest (Bug F)" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/hooks" "$dst/hooks" + echo "#!/bin/bash" > "$src/hooks/lint.sh" + cat > "$src/settings.json" <<EOF +{"hooks":{"PostToolUse":[{"hooks":[{"command":"$src/hooks/lint.sh"}]}]}} +EOF + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_hooks_apply '$src' '$dst' hooks/lint.sh \"\$backup_dir\" + jq -r '.files[].path' \"\$backup_dir/.ckipper-sync-manifest.json\" | sort | tr '\n' ','" + [[ "$output" == *"settings.json"* ]] +} + +# Bug F end-to-end: rollback after hook sync restores settings.json — without +# the manifest entry from the fix, rollback would leave the .hooks block +# polluted with the synced entry pointing at a (deleted) script. +@test "rollback after hook sync removes script AND restores settings.json (Bug F)" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src/hooks" "$dst/hooks" + echo "#!/bin/bash" > "$src/hooks/lint.sh" + cat > "$src/settings.json" <<EOF +{"hooks":{"PostToolUse":[{"hooks":[{"command":"$src/hooks/lint.sh"}]}]}} +EOF + echo '{"keep":"this"}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_hooks_apply '$src' '$dst' hooks/lint.sh \"\$backup_dir\" + # Sanity: post-apply, settings.json has the synced .hooks entry. + jq -r '.hooks.PostToolUse | length' '$dst/settings.json' + # Roll back. + _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst' + # The script must be gone… + [[ -f '$dst/hooks/lint.sh' ]] && echo SCRIPT_KEPT || echo SCRIPT_GONE + # …and settings.json restored to the pre-sync state (no .hooks block).
+ jq -r '.keep' '$dst/settings.json' + jq -e '.hooks' '$dst/settings.json' >/dev/null 2>&1 && echo STILL_HOOKED || echo CLEAN" + [[ "$output" == *"SCRIPT_GONE"* ]] + [[ "$output" == *"this"* ]] + [[ "$output" == *"CLEAN"* ]] +} diff --git a/lib/account/sync/strategies/statusline.zsh b/lib/account/sync/strategies/statusline.zsh new file mode 100644 index 0000000..eba54b5 --- /dev/null +++ b/lib/account/sync/strategies/statusline.zsh @@ -0,0 +1,140 @@ +#!/usr/bin/env zsh +# Strategy module for "statusline" sync type. +# +# Statusline lives at <account-dir>/settings.json `.statusLine` and may +# reference an executable script via .statusLine.command. Sync semantics: +# +# - settings reference (.statusLine.*): always copied +# - referenced script: copied IFF the path resolves to inside <account-dir>/. +# Otherwise (system path, shared script, etc.) the reference is copied +# verbatim without touching any file on the destination. +# +# Implementation depends on lib/account/sync/strategies/structured.zsh +# for _ckipper_account_sync_json_atomic_write and on lib/account/sync/backup.zsh +# for _ckipper_account_sync_backup_file. + +# Single item id constant — statusline is not enumerable per-element. +readonly _CKIPPER_SYNC_STATUSLINE_ID="statusLine" + +# Enumerate: emit a single "statusLine" entry iff source has one. +# +# Args: $1 — source account dir. +# Returns: 0; prints one line on hit, empty on miss. +_ckipper_account_sync_statusline_enumerate() { + local src="$1" + local file="$src/settings.json" + [[ ! -f "$file" ]] && return 0 + local has; has=$(jq -r '.statusLine // null' "$file" 2>/dev/null) + [[ "$has" == "null" ]] && return 0 + echo "$_CKIPPER_SYNC_STATUSLINE_ID\tStatus line" +} + +# Compare: shape varies. We treat the .statusLine subtree as one structured +# value (same approach as settings, but always at the .statusLine path). +# +# Args: $1 — src; $2 — dst; $3 — id (always "statusLine"). +# Returns: 0; prints "new" | "overwrite" | "unchanged".
+_ckipper_account_sync_statusline_compare() { + local src="$1" dst="$2" + local s d + s=$(jq -c '.statusLine // null' "$src/settings.json" 2>/dev/null) + d=$(jq -c '.statusLine // null' "$dst/settings.json" 2>/dev/null) + _ckipper_account_sync_json_status "$s" "$d" +} + +# Resolve and detect whether the referenced script lives under <src-dir>/. +# Empty stdout = external (or no command); non-empty = absolute path +# inside src. +# +# Walks every whitespace-delimited token because real-world commands often +# use an interpreter prefix (e.g. "bash /path/to/script.sh", "node x.js", +# "python3 statusline.py"). Returns the first token that resolves to a path +# under the source dir. +# +# Args: $1 — src dir. +# Returns: 0; prints internal-script path or empty. +_ckipper_account_sync_statusline_internal_path() { + local src="$1" + local file="$src/settings.json" + [[ ! -f "$file" ]] && return 0 + local cmd; cmd=$(jq -r '.statusLine.command // empty' "$file" 2>/dev/null) + [[ -z "$cmd" ]] && return 0 + local token + for token in ${(z)cmd}; do + case "$token" in + "$src"/*) echo "$token"; return 0 ;; + esac + done +} + +# Summary: combines internal/external indicator with overwrite-or-new. +# +# Args: $1 — src; $2 — dst; $3 — id. +# Returns: 0; prints summary text. +_ckipper_account_sync_statusline_summary() { + local src="$1" dst="$2" + local cmp_status; cmp_status=$(_ckipper_account_sync_statusline_compare "$src" "$dst" "$_CKIPPER_SYNC_STATUSLINE_ID") + local internal; internal=$(_ckipper_account_sync_statusline_internal_path "$src") + local kind="external" + [[ -n "$internal" ]] && kind="internal (will copy script)" + case "$cmp_status" in + new) echo "new — $kind" ;; + overwrite) echo "overwrite — $kind" ;; + unchanged) echo "unchanged" ;; + esac +} + +# Diff: jq before/after of .statusLine.
+_ckipper_account_sync_statusline_diff() { + local src="$1" dst="$2" + echo "── source ($src/settings.json:.statusLine) ──" + jq '.statusLine // null' "$src/settings.json" + echo "── destination ($dst/settings.json:.statusLine) ──" + jq '.statusLine // null' "$dst/settings.json" 2>/dev/null +} + +# Apply: settings.statusLine subtree is written via setpath (same approach +# as the settings strategy). If the referenced script is internal, copy +# it under the destination dir AND rewrite the .command path to the +# destination's location. +# +# Args: $1 — src; $2 — dst; $3 — id; $4 — backup_dir. +# Returns: 0 on success; non-zero on jq/cp/write failure. +_ckipper_account_sync_statusline_apply() { + local src="$1" dst="$2" id="$3" backup_dir="$4" + _ckipper_account_sync_backup_file "$backup_dir" "$dst/settings.json" "settings.json" || return 1 + [[ -f "$dst/settings.json" ]] || echo '{}' > "$dst/settings.json" + local internal; internal=$(_ckipper_account_sync_statusline_internal_path "$src") + local statusline_obj; statusline_obj=$(jq -c '.statusLine' "$src/settings.json") + if [[ -n "$internal" ]]; then + statusline_obj=$(_ckipper_account_sync_statusline_copy_and_rewrite \ + "$src" "$dst" "$internal" "$backup_dir" "$statusline_obj") || return 1 + fi + local merged + merged=$(jq --argjson v "$statusline_obj" '.statusLine = $v' "$dst/settings.json") + _ckipper_account_sync_json_atomic_write "$dst/settings.json" "$merged" +} + +# Internal-script branch: copies the script then rewrites .command in the +# given JSON to point at the destination's path. Appends a manifest entry +# for the script file so rollback can delete it (op=create) or restore the +# previous version (op=overwrite); without this, a mid-write crash leaves +# the script orphaned with no rollback record. +# +# Args: $1 — src dir; $2 — dst dir; $3 — internal script abs path; +# $4 — backup_dir; $5 — statusline JSON object. +# Returns: 0 on success (prints rewritten JSON); 1 on cp failure. 
+_ckipper_account_sync_statusline_copy_and_rewrite() {
+  local src="$1" dst="$2" internal="$3" backup_dir="$4" obj="$5"
+  # Quote $src in the pattern so path characters are matched literally.
+  local rel="${internal#"$src"/}"
+  local script_op="overwrite"; [[ -e "$dst/$rel" ]] || script_op="create"
+  _ckipper_account_sync_backup_file "$backup_dir" "$dst/$rel" "$rel" || return 1
+  _ckipper_account_sync_manifest_append "$backup_dir" "$rel" "$script_op" statusline "$rel"
+  mkdir -p "$dst/${rel:h}"
+  cp -a "$internal" "$dst/$rel" || return 1
+  # Literal split+join (NOT sub/gsub) — paths often contain regex
+  # metacharacters (e.g. `.`) and the interpreter-prefix form means
+  # the `$src` substring is not necessarily at position 0.
+  echo "$obj" | jq --arg src "$src" --arg dst "$dst" \
+    '.command = (.command | split($src) | join($dst))'
+}
diff --git a/lib/account/sync/strategies/statusline_test.bats b/lib/account/sync/strategies/statusline_test.bats
new file mode 100644
index 0000000..9d78b94
--- /dev/null
+++ b/lib/account/sync/strategies/statusline_test.bats
@@ -0,0 +1,159 @@
+#!/usr/bin/env bats
+# Unit tests for lib/account/sync/strategies/statusline.zsh.
+
+load "${BATS_TEST_DIRNAME}/../../../../tests/lib/test-helper.bash"
+
+setup() { setup_isolated_env; }
+teardown() { teardown_isolated_env; }
+
+run_in_zsh() {
+  run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" TMP_HOME="$TMP_HOME" \
+    zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; \
+      source \"$REPO_ROOT/lib/account/sync/strategies/statusline.zsh\"; $*"
+}
+
+@test "statusline_enumerate emits a single entry when statusLine is set" {
+  local src="$TMP_HOME/src"
+  mkdir -p "$src"
+  echo '{"statusLine":{"command":"/usr/bin/echo hi"}}' > "$src/settings.json"
+  run_in_zsh "_ckipper_account_sync_statusline_enumerate '$src' | cut -f1"
+  [[ "$output" == *"statusLine"* ]]
+}
+
+@test "statusline_enumerate empty when statusLine missing" {
+  local src="$TMP_HOME/src"
+  mkdir -p "$src"
+  echo '{}' > "$src/settings.json"
+  run_in_zsh "_ckipper_account_sync_statusline_enumerate '$src' | wc -l | tr -d ' '"
+  [[ "$output" == *"0"* ]]
+}
+
+@test "statusline_internal_script_path detects script inside src dir" {
+  local src="$TMP_HOME/src"
+  mkdir -p "$src"
+  echo "{\"statusLine\":{\"command\":\"$src/my-statusline.sh\"}}" > "$src/settings.json"
+  echo "#!/bin/bash" > "$src/my-statusline.sh"
+  run_in_zsh "_ckipper_account_sync_statusline_internal_path '$src'"
+  [[ "$output" == *"$src/my-statusline.sh"* ]]
+}
+
+@test "statusline_internal_script_path returns empty when external" {
+  local src="$TMP_HOME/src"
+  mkdir -p "$src"
+  echo '{"statusLine":{"command":"/usr/bin/echo hi"}}' > "$src/settings.json"
+  run_in_zsh "out=\$(_ckipper_account_sync_statusline_internal_path '$src'); echo \"[\$out]\""
+  [[ "$output" == *"[]"* ]]
+}
+
+@test "statusline_internal_script_path detects interpreter-prefix command" {
+  local src="$TMP_HOME/src"
+  mkdir -p "$src"
+  echo "#!/bin/bash" > "$src/my-statusline.sh"
+  # The common real-world form: "bash <script>" with an 
interpreter prefix. + echo "{\"statusLine\":{\"command\":\"bash $src/my-statusline.sh\"}}" > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_statusline_internal_path '$src'" + [[ "$output" == *"$src/my-statusline.sh"* ]] +} + +@test "statusline_apply: interpreter-prefix internal — copy + rewrite path" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo "#!/bin/bash" > "$src/my-statusline.sh" + chmod +x "$src/my-statusline.sh" + echo "{\"statusLine\":{\"command\":\"bash $src/my-statusline.sh\"}}" > "$src/settings.json" + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_statusline_apply '$src' '$dst' statusLine \"\$backup_dir\" + jq -r '.statusLine.command' '$dst/settings.json' + ls '$dst/my-statusline.sh' && echo COPIED" + [[ "$output" == *"bash $dst/my-statusline.sh"* ]] + [[ "$output" == *"COPIED"* ]] + [[ "$output" != *"$src/my-statusline.sh"* ]] +} + +@test "statusline_apply: external script — settings only, no file copy" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"statusLine":{"command":"/usr/bin/echo hi"}}' > "$src/settings.json" + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_statusline_apply '$src' '$dst' statusLine \"\$backup_dir\" + jq -r '.statusLine.command' '$dst/settings.json' + ls '$dst' | grep -c statusline.sh || true" + [[ "$output" == *"/usr/bin/echo hi"* ]] + [[ "$output" == *"0"* ]] +} + +@test "statusline_apply: internal script — copy file + rewrite reference" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo "#!/bin/bash" > "$src/my-statusline.sh" + chmod +x "$src/my-statusline.sh" + echo 
"{\"statusLine\":{\"command\":\"$src/my-statusline.sh\"}}" > "$src/settings.json" + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_statusline_apply '$src' '$dst' statusLine \"\$backup_dir\" + jq -r '.statusLine.command' '$dst/settings.json' + ls '$dst/my-statusline.sh' && echo COPIED" + [[ "$output" == *"$dst/my-statusline.sh"* ]] + [[ "$output" == *"COPIED"* ]] +} + +@test "statusline_apply: internal-script copy is recorded in manifest (rollback safety)" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo "#!/bin/bash" > "$src/my-statusline.sh" + chmod +x "$src/my-statusline.sh" + echo "{\"statusLine\":{\"command\":\"$src/my-statusline.sh\"}}" > "$src/settings.json" + echo '{}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_statusline_apply '$src' '$dst' statusLine \"\$backup_dir\" + jq -r '.files[] | \"\(.operation)\t\(.path)\"' \"\$backup_dir\"/.ckipper-sync-manifest.json | sort" + # Both files must be in the manifest so a later rollback can restore them. + [[ "$output" == *"settings.json"* ]] + [[ "$output" == *"my-statusline.sh"* ]] +} + +# Bug D: statusline_compare lacked the empty-string guard that mcp_compare +# and settings_compare have. When the destination's settings.json was +# missing entirely (jq exits non-zero, d=""), the [[ "$d" == "null" ]] +# branch missed and the function returned "overwrite" instead of "new". +# That mislabeled the preview UI; with Bug E fixed, op-derivation in +# apply_one is independent of compare's verdict, so rollback remained +# correct — but the user-visible label was still wrong. 
+@test "statusline_compare returns 'new' when destination settings.json is missing (Bug D)" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo '{"statusLine":{"command":"/usr/bin/echo"}}' > "$src/settings.json"
+  # No settings.json on dst.
+  run_in_zsh "_ckipper_account_sync_statusline_compare '$src' '$dst' statusLine"
+  [[ "$output" == *"new"* ]]
+  [[ "$output" != *"overwrite"* ]]
+}
+
+@test "statusline rollback removes orphaned script when op=create" {
+  local src="$TMP_HOME/src" dst="$TMP_HOME/dst"
+  mkdir -p "$src" "$dst"
+  echo "#!/bin/bash" > "$src/my-statusline.sh"
+  chmod +x "$src/my-statusline.sh"
+  echo "{\"statusLine\":{\"command\":\"$src/my-statusline.sh\"}}" > "$src/settings.json"
+  echo '{}' > "$dst/settings.json"
+  run_in_zsh "
+    backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src)
+    _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst
+    _ckipper_account_sync_statusline_apply '$src' '$dst' statusLine \"\$backup_dir\"
+    _ckipper_account_sync_rollback_target \"\$backup_dir\" '$dst'
+    [[ -f '$dst/my-statusline.sh' ]] && echo STILL_THERE || echo GONE"
+  [[ "$output" == *"GONE"* ]]
+  [[ "$output" != *"STILL_THERE"* ]]
+}
diff --git a/lib/account/sync/strategies/structured.zsh b/lib/account/sync/strategies/structured.zsh
new file mode 100644
index 0000000..da1213e
--- /dev/null
+++ b/lib/account/sync/strategies/structured.zsh
@@ -0,0 +1,287 @@
+#!/usr/bin/env zsh
+# Strategy module for "structured" sync types (JSON-key merges):
+#   - mcp      → <dir>/.claude.json `.mcpServers`
+#   - settings → <dir>/settings.json (top-level + nested keys; .hooks and .statusLine excluded)
+#   - prefs    → $CKIPPER_REGISTRY `.accounts.<name>.preferences`
+#
+# Each type implements the strategy contract documented in engine.zsh:
+#   _ckipper_account_sync_<type>_{enumerate,compare,summary,diff,apply}
+#
+# All apply functions go through _ckipper_account_sync_json_atomic_write which:
+#   1. Writes the candidate JSON to a tmpfile
+#   2. Validates with `jq -e .`
+#   3. 
mv's into place ONLY if validation passes (Safeguard #4)
+
+# Validate a JSON file with jq. No output; exit code is the signal.
+#
+# Args: $1 — path to a JSON file (must exist).
+# Returns: 0 if valid; non-zero if invalid or jq unavailable.
+_ckipper_account_sync_json_validate() {
+  jq -e . "$1" >/dev/null 2>&1
+}
+
+# Write JSON to a target path atomically with validation. Steps:
+#   1. mktemp peer of target
+#   2. write the candidate JSON pretty-printed via jq
+#   3. validate via _ckipper_account_sync_json_validate; abort on failure (no clobber)
+#   4. mv into place
+#
+# Args: $1 — target path; $2 — candidate JSON string.
+# Returns: 0 on commit; 1 on jq parse error; 2 on mv failure.
+# Errors (stderr): "Refusing to write invalid JSON to <target>"
+_ckipper_account_sync_json_atomic_write() {
+  local target="$1" json="$2"
+  mkdir -p "${target:h}"
+  local tmp; tmp=$(mktemp "${target}.XXXXXX")
+  echo "$json" | jq '.' > "$tmp" 2>/dev/null
+  if ! _ckipper_account_sync_json_validate "$tmp"; then
+    echo "Refusing to write invalid JSON to $target" >&2
+    rm -f "$tmp"
+    return 1
+  fi
+  mv "$tmp" "$target" || return 2
+}
+
+# ── MCP strategy ─────────────────────────────────────────────────────────
+
+# Enumerate every MCP server name in the source's .claude.json. Empty stdout
+# when the file is missing or .mcpServers is empty/null.
+#
+# Args: $1 — source account dir.
+# Returns: 0; prints "<name>\t<name>" per line (id and display are the same here).
+_ckipper_account_sync_mcp_enumerate() {
+  local src="$1"
+  local file="$src/.claude.json"
+  [[ ! -f "$file" ]] && return 0
+  jq -r '.mcpServers // {} | keys[]? | "\(.)\t\(.)"' "$file" 2>/dev/null
+}
+
+# Compare a single MCP server between source and destination.
+#
+# Args: $1 — src; $2 — dst; $3 — server name.
+# Returns: 0; prints "new" | "overwrite" | "unchanged".
+_ckipper_account_sync_mcp_compare() { + local src="$1" dst="$2" name="$3" + local s d + s=$(jq -c --arg n "$name" '.mcpServers[$n] // null' "$src/.claude.json" 2>/dev/null) + d=$(jq -c --arg n "$name" '.mcpServers[$n] // null' "$dst/.claude.json" 2>/dev/null) + _ckipper_account_sync_json_status "$s" "$d" +} + +# One-line summary of the change for the preview table. +# +# Args: $1 — src; $2 — dst; $3 — server name. +# Returns: 0; prints summary text. +_ckipper_account_sync_mcp_summary() { + local src="$1" dst="$2" name="$3" + local cmp_status; cmp_status=$(_ckipper_account_sync_mcp_compare "$src" "$dst" "$name") + case "$cmp_status" in + new) echo "new" ;; + overwrite) echo "overwrite — server config changed" ;; + unchanged) echo "unchanged" ;; + esac +} + +# Full diff for drill-down view: jq pretty-print of source vs destination. +# +# Args: $1 — src; $2 — dst; $3 — server name. +# Returns: 0; prints labeled before/after blocks. +_ckipper_account_sync_mcp_diff() { + local src="$1" dst="$2" name="$3" + echo "── source ($src/.claude.json:.mcpServers.$name) ──" + jq --arg n "$name" '.mcpServers[$n] // null' "$src/.claude.json" + echo "── destination ($dst/.claude.json:.mcpServers.$name) ──" + jq --arg n "$name" '.mcpServers[$n] // null' "$dst/.claude.json" 2>/dev/null +} + +# Merge a single server from src into dst's .claude.json. Backs up the +# destination's .claude.json before writing. +# +# Args: $1 — src; $2 — dst; $3 — server name; $4 — backup_dir. +# Returns: 0 on success; non-zero on jq/write failure. 
+_ckipper_account_sync_mcp_apply() {
+  local src="$1" dst="$2" name="$3" backup_dir="$4"
+  _ckipper_account_sync_backup_file "$backup_dir" "$dst/.claude.json" ".claude.json" || return 1
+  local server_obj
+  server_obj=$(jq -c --arg n "$name" '.mcpServers[$n]' "$src/.claude.json")
+  [[ -f "$dst/.claude.json" ]] || echo '{}' > "$dst/.claude.json"
+  local merged
+  merged=$(jq --arg n "$name" --argjson v "$server_obj" \
+    '.mcpServers = (.mcpServers // {}) | .mcpServers[$n] = $v' "$dst/.claude.json")
+  _ckipper_account_sync_json_atomic_write "$dst/.claude.json" "$merged"
+}
+
+# ── Settings strategy ────────────────────────────────────────────────────
+
+# Enumerate jq paths the user can sync. Recursion stops at scalars or
+# top-level keys whose value is a primitive; objects are enumerated as
+# their leaf paths so the user can sync just `.permissions.allow` without
+# touching `.permissions.deny`.
+#
+# Excludes the `.hooks` block (the user-hooks sync type owns it) and the
+# `.statusLine` subtree (the statusline sync type owns it — the dedicated
+# strategy handles internal-script copy + path rewrite, which `settings`
+# cannot do, so enumerating it here would silently plant broken absolute
+# paths on the destination when sync runs without `statusline` included).
+#
+# The id column is the JSON-encoded path array (e.g. `["permissions","allow"]`
+# or `["cleanup-period-days"]`). compare/diff/apply parse it back via
+# `--argjson p` and use `getpath`/`setpath` — never `.foo.bar` filter strings,
+# which jq would mis-parse for keys containing `-`, `.`, or other operator
+# characters. The display column is the dotted string for the preview UI.
+#
+# Args: $1 — source account dir.
+# Returns: 0; prints "<id>\t<display>" per line.
+_ckipper_account_sync_settings_enumerate() {
+  local src="$1"
+  local file="$src/settings.json"
+  [[ ! -f "$file" ]] && return 0
+  jq -r '
+    . 
as $root + | [paths] + | map(select(([.[]] | map(type == "number") | any | not))) + | map(select(length > 0)) + | .[] + | . as $p + | select(($root | getpath($p)) | type != "object") + | ($p | map(tostring) | join(".")) as $display + | select($display | startswith("hooks") | not) + | select($display | startswith("statusLine") | not) + | "\($p | tojson)\t\($display)" + ' "$file" 2>/dev/null +} + +# Compare a jq-path between source and destination. +# +# Args: $1 — src; $2 — dst; $3 — JSON path array (e.g. '["model"]'). +# Returns: 0; prints "new" | "overwrite" | "unchanged". +_ckipper_account_sync_settings_compare() { + local src="$1" dst="$2" id="$3" + local s d + s=$(jq -c --argjson p "$id" 'getpath($p) // null' "$src/settings.json" 2>/dev/null) + d=$(jq -c --argjson p "$id" 'getpath($p) // null' "$dst/settings.json" 2>/dev/null) + _ckipper_account_sync_json_status "$s" "$d" +} + +# One-line summary for the preview table. +# +# Args: $1 — src; $2 — dst; $3 — JSON path array. +# Returns: 0; prints summary. +_ckipper_account_sync_settings_summary() { + local src="$1" dst="$2" id="$3" + local cmp_status; cmp_status=$(_ckipper_account_sync_settings_compare "$src" "$dst" "$id") + case "$cmp_status" in + new) echo "new key" ;; + overwrite) echo "overwrite — value changed" ;; + unchanged) echo "unchanged" ;; + esac +} + +# Full diff for drill-down. +# +# Args: $1 — src; $2 — dst; $3 — JSON path array. +# Returns: 0; prints labeled before/after. 
+_ckipper_account_sync_settings_diff() {
+  local src="$1" dst="$2" id="$3"
+  local display; display=$(jq -r -n --argjson p "$id" '$p | map(tostring) | join(".")')
+  echo "── source ($src/settings.json:$display) ──"
+  jq --argjson p "$id" 'getpath($p)' "$src/settings.json"
+  echo "── destination ($dst/settings.json:$display) ──"
+  jq --argjson p "$id" 'getpath($p)' "$dst/settings.json" 2>/dev/null
+}
+
+# Apply: write the source value at the given path into the destination's
+# settings.json using jq's `setpath`, preserving all sibling content. Uses
+# atomic write + JSON validation gate.
+#
+# Args: $1 — src; $2 — dst; $3 — JSON path array; $4 — backup_dir.
+# Returns: 0 on success; non-zero on jq/write failure.
+_ckipper_account_sync_settings_apply() {
+  local src="$1" dst="$2" id="$3" backup_dir="$4"
+  _ckipper_account_sync_backup_file "$backup_dir" "$dst/settings.json" "settings.json" || return 1
+  [[ -f "$dst/settings.json" ]] || echo '{}' > "$dst/settings.json"
+  local val_json
+  val_json=$(jq -c --argjson p "$id" 'getpath($p)' "$src/settings.json")
+  local merged
+  merged=$(jq --argjson p "$id" --argjson v "$val_json" \
+    'setpath($p; $v)' "$dst/settings.json")
+  _ckipper_account_sync_json_atomic_write "$dst/settings.json" "$merged"
+}
+
+# ── Prefs strategy ───────────────────────────────────────────────────────
+#
+# Operates on the registry (accounts.json), not per-account dirs. The engine
+# passes account NAMES through the dir args. This is the only strategy that
+# uses CKIPPER_REGISTRY rather than the dir paths.
+#
+# Depends on lib/core/schema.zsh (account-scope key list) and
+# lib/core/config.zsh (_core_config_get/_core_config_set).
+
+# Enumerate every account-scope schema key.
+#
+# Args: $1 — source account name (unused; kept for contract uniformity).
+# Returns: 0; prints "<key>\t<key>" per line.
+_ckipper_account_sync_prefs_enumerate() { + local key + for key in "${(@k)_CKIPPER_SCHEMA_TYPE}"; do + [[ "${_CKIPPER_SCHEMA_SCOPE[$key]}" == "account" ]] || continue + echo "$key\t$key" + done +} + +# Compare one preference key. +# +# Args: $1 — source account name; $2 — dst account name; $3 — schema key. +# Returns: 0; prints "new" | "overwrite" | "unchanged". +_ckipper_account_sync_prefs_compare() { + local src="$1" dst="$2" key="$3" + local s d + s=$(_core_config_get "$key" "$src") + d=$(_core_config_get "$key" "$dst") + [[ "$s" == "$d" ]] && { echo "unchanged"; return 0; } + # `new` is rare for prefs — the v2 migration ensures every account has + # all keys with defaults. We still distinguish: if the destination has + # no override (raw read empty), call it new. + local raw; raw=$(_core_config_read_account "$key" "$dst") + [[ -z "$raw" ]] && { echo "new"; return 0; } + echo "overwrite" +} + +# Summary: for prefs the value is short, render inline. +# +# Args: $1 — src name; $2 — dst name; $3 — key. +# Returns: 0; prints e.g. "false → true" or "(default) → true". +_ckipper_account_sync_prefs_summary() { + local src="$1" dst="$2" key="$3" + local s d_raw d_eff + s=$(_core_config_get "$key" "$src") + d_raw=$(_core_config_read_account "$key" "$dst") + d_eff=$(_core_config_get "$key" "$dst") + if [[ -z "$d_raw" ]]; then + echo "(default $d_eff) → $s" + else + echo "$d_eff → $s" + fi +} + +# Diff: prefs are scalar — diff is the same as summary. +# +# Args: $1 — src name; $2 — dst name; $3 — key. +# Returns: 0; prints summary. +_ckipper_account_sync_prefs_diff() { + _ckipper_account_sync_prefs_summary "$@" +} + +# Apply via _core_config_set (uses registry locking). The registry write +# itself is its own atomic operation, so we do NOT need the JSON validation +# gate here. The backup is the file copy of CKIPPER_REGISTRY into the +# backup dir — recorded in the manifest as path "accounts.json". +# +# Args: $1 — src name; $2 — dst name; $3 — key; $4 — backup_dir. 
+# Returns: 0 on success; non-zero on read/write failure. +_ckipper_account_sync_prefs_apply() { + local src="$1" dst="$2" key="$3" backup_dir="$4" + _ckipper_account_sync_backup_file "$backup_dir" "$CKIPPER_REGISTRY" "accounts.json" || return 1 + local val; val=$(_core_config_get "$key" "$src") + _core_config_set "$key" "$val" "$dst" +} diff --git a/lib/account/sync/strategies/structured_test.bats b/lib/account/sync/strategies/structured_test.bats new file mode 100644 index 0000000..5f1513c --- /dev/null +++ b/lib/account/sync/strategies/structured_test.bats @@ -0,0 +1,313 @@ +#!/usr/bin/env bats +# Unit tests for lib/account/sync/strategies/structured.zsh. + +load "${BATS_TEST_DIRNAME}/../../../../tests/lib/test-helper.bash" + +setup() { setup_isolated_env; } +teardown() { teardown_isolated_env; } + +run_in_zsh() { + run env HOME="$HOME" CKIPPER_DIR="$CKIPPER_DIR" CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + TMP_HOME="$TMP_HOME" \ + zsh -c "source \"$REPO_ROOT/lib/account/sync/backup.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/_shared.zsh\"; \ + source \"$REPO_ROOT/lib/account/sync/strategies/structured.zsh\"; $*" +} + +@test "_ckipper_account_sync_json_validate accepts valid JSON" { + local f="$TMP_HOME/ok.json" + echo '{"a": 1}' > "$f" + run_in_zsh "_ckipper_account_sync_json_validate '$f' && echo OK" + [[ "$output" == *"OK"* ]] +} + +@test "_ckipper_account_sync_json_validate rejects invalid JSON" { + local f="$TMP_HOME/bad.json" + echo '{"a": 1' > "$f" + run_in_zsh "_ckipper_account_sync_json_validate '$f'" + [ "$status" -ne 0 ] +} + +@test "_ckipper_account_sync_json_atomic_write writes via tmp + mv" { + local f="$TMP_HOME/out.json" + run_in_zsh "_ckipper_account_sync_json_atomic_write '$f' '{\"x\":42}'; cat '$f'" + [[ "$output" == *'"x": 42'* ]] +} + +@test "_ckipper_account_sync_json_atomic_write refuses to commit invalid JSON" { + local f="$TMP_HOME/out2.json" + run_in_zsh "_ckipper_account_sync_json_atomic_write '$f' 'not-json'" + [ "$status" -ne 0 ] + 
[[ ! -f "$f" ]] +} + +# ── MCP strategy ───────────────────────────────────────────────────────── + +@test "mcp_enumerate lists every server name" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"mcpServers":{"github":{"command":"x"},"vibma":{"command":"y"}}}' > "$src/.claude.json" + run_in_zsh "_ckipper_account_sync_mcp_enumerate '$src' | sort" + [[ "$output" == *"github"* ]] + [[ "$output" == *"vibma"* ]] +} + +@test "mcp_enumerate emits empty when no .claude.json" { + local src="$TMP_HOME/src" + mkdir -p "$src" + run_in_zsh "_ckipper_account_sync_mcp_enumerate '$src' | wc -l | tr -d ' '" + [[ "$output" == *"0"* ]] +} + +@test "mcp_compare: new when destination lacks the server" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json" + echo '{"mcpServers":{}}' > "$dst/.claude.json" + run_in_zsh "_ckipper_account_sync_mcp_compare '$src' '$dst' github" + [[ "$output" == *"new"* ]] +} + +@test "mcp_compare: unchanged when both sides match" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + local both='{"mcpServers":{"github":{"command":"x","args":["a"]}}}' + echo "$both" > "$src/.claude.json" + echo "$both" > "$dst/.claude.json" + run_in_zsh "_ckipper_account_sync_mcp_compare '$src' '$dst' github" + [[ "$output" == *"unchanged"* ]] +} + +@test "mcp_compare: overwrite when contents differ" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"mcpServers":{"github":{"command":"new"}}}' > "$src/.claude.json" + echo '{"mcpServers":{"github":{"command":"old"}}}' > "$dst/.claude.json" + run_in_zsh "_ckipper_account_sync_mcp_compare '$src' '$dst' github" + [[ "$output" == *"overwrite"* ]] +} + +@test "mcp_compare: new when destination .claude.json does not exist" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json" + 
run_in_zsh "_ckipper_account_sync_mcp_compare '$src' '$dst' github" + [[ "$output" == *"new"* ]] + [[ "$output" != *"overwrite"* ]] +} + +@test "mcp_apply merges into destination preserving other servers" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"mcpServers":{"github":{"command":"x"}}}' > "$src/.claude.json" + echo '{"mcpServers":{"other":{"command":"y"}},"foo":"bar"}' > "$dst/.claude.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_mcp_apply '$src' '$dst' github \"\$backup_dir\" + jq '.mcpServers | keys | sort | join(\",\")' '$dst/.claude.json' + jq -r '.foo' '$dst/.claude.json'" + [[ "$output" == *'"github,other"'* ]] + [[ "$output" == *"bar"* ]] +} + +# ── Settings strategy ──────────────────────────────────────────────────── + +@test "settings_enumerate emits top-level keys (display column is dotted)" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"env":{"FOO":"1"},"model":"opus"}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_enumerate '$src' | cut -f2 | sort | tr '\n' ','" + [[ "$output" == *"env.FOO"* ]] + [[ "$output" == *"model"* ]] +} + +@test "settings_enumerate id column is JSON path array" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"env":{"FOO":"1"},"model":"opus"}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_enumerate '$src' | cut -f1 | sort | tr '\n' '|'" + [[ "$output" == *'["env","FOO"]'* ]] + [[ "$output" == *'["model"]'* ]] +} + +@test "settings_enumerate excludes .hooks and .statusLine (owned by other strategies)" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"statusLine":{"command":"x"},"hooks":{"PreToolUse":[]},"model":"opus"}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_enumerate '$src' | cut -f2" + [[ "$output" != *"hooks"* ]] + [[ "$output" != *"statusLine"* ]] + [[ 
"$output" == *"model"* ]] +} + +@test "settings_enumerate produces nested jq paths for object-typed values" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"permissions":{"allow":["Bash(ls:*)"],"deny":["Bash(rm:*)"]}}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_enumerate '$src' | cut -f2 | sort | tr '\n' ','" + [[ "$output" == *"permissions.allow,permissions.deny,"* ]] +} + +# Regression: hyphenated keys (most Claude Code settings — e.g. cleanup-period-days) +# previously got interpolated into a jq filter as `.cleanup-period-days`, which +# jq parsed as `.cleanup - .period - .days` (subtraction). settings_compare +# returned "new" for both source and destination (both jq calls errored to +# empty), and settings_apply aborted with a jq compile error mid-stream, +# rolling back the entire sync. +@test "settings_enumerate id is JSON-array-safe for hyphenated keys" { + local src="$TMP_HOME/src" + mkdir -p "$src" + echo '{"cleanup-period-days":7}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_enumerate '$src'" + [[ "$output" == *'["cleanup-period-days"]'* ]] + [[ "$output" == *"cleanup-period-days"* ]] +} + +@test "settings_compare handles hyphenated keys without jq compile error" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"cleanup-period-days":7}' > "$src/settings.json" + echo '{"cleanup-period-days":14}' > "$dst/settings.json" + run_in_zsh "_ckipper_account_sync_settings_compare '$src' '$dst' '[\"cleanup-period-days\"]'" + [[ "$output" == *"overwrite"* ]] +} + +@test "settings_apply writes hyphenated keys correctly" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"cleanup-period-days":7,"unrelated":"keep"}' > "$src/settings.json" + echo '{"unrelated":"keep","cleanup-period-days":14}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" 
src dst + _ckipper_account_sync_settings_apply '$src' '$dst' '[\"cleanup-period-days\"]' \"\$backup_dir\" || echo APPLY_FAILED + jq -r '.\"cleanup-period-days\"' '$dst/settings.json' + jq -r '.unrelated' '$dst/settings.json'" + [ "$status" -eq 0 ] + [[ "$output" != *"APPLY_FAILED"* ]] + [[ "$output" == *"7"* ]] + [[ "$output" == *"keep"* ]] +} + +# Regression: keys containing literal dots (rare, but legal JSON) used to be +# silently corrupted — settings_apply did `jq -n --arg p "$id" '$p | split(".")'` +# which split "some.key" into ["some","key"] and then setpath() built a nested +# structure. The actual `"some.key"` key was clobbered with a null or replaced +# entirely. Fix: keys travel as JSON-encoded path arrays, never split. +@test "settings_apply preserves keys with literal dots" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"some.key":"value-from-src"}' > "$src/settings.json" + echo '{"some.key":"value-from-dst","unrelated":"keep"}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_settings_apply '$src' '$dst' '[\"some.key\"]' \"\$backup_dir\" + jq -r '.\"some.key\"' '$dst/settings.json' + jq -r '.unrelated' '$dst/settings.json' + if jq -e 'has(\"some\") and (.some | type == \"object\")' '$dst/settings.json' >/dev/null 2>&1; then + echo NESTED_OBJECT_LEAKED + else + echo NO_NESTED_LEAK + fi" + [ "$status" -eq 0 ] + [[ "$output" == *"value-from-src"* ]] + [[ "$output" == *"keep"* ]] + [[ "$output" == *"NO_NESTED_LEAK"* ]] +} + +@test "settings_compare: new when path missing in destination" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"model":"opus"}' > "$src/settings.json" + echo '{}' > "$dst/settings.json" + run_in_zsh "_ckipper_account_sync_settings_compare '$src' '$dst' '[\"model\"]'" + [[ "$output" == *"new"* ]] +} + +@test "settings_compare: 
unchanged when values match" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"model":"opus"}' > "$src/settings.json" + echo '{"model":"opus"}' > "$dst/settings.json" + run_in_zsh "_ckipper_account_sync_settings_compare '$src' '$dst' '[\"model\"]'" + [[ "$output" == *"unchanged"* ]] +} + +@test "settings_compare: new when destination settings.json does not exist" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"model":"opus"}' > "$src/settings.json" + run_in_zsh "_ckipper_account_sync_settings_compare '$src' '$dst' '[\"model\"]'" + [[ "$output" == *"new"* ]] + [[ "$output" != *"overwrite"* ]] +} + +@test "settings_apply writes nested path without disturbing siblings" { + local src="$TMP_HOME/src" dst="$TMP_HOME/dst" + mkdir -p "$src" "$dst" + echo '{"permissions":{"allow":["Bash(ls:*)"]}}' > "$src/settings.json" + echo '{"permissions":{"deny":["Bash(rm:*)"]},"unrelated":"keep"}' > "$dst/settings.json" + run_in_zsh " + backup_dir=\$(_ckipper_account_sync_backup_create '$dst' src) + _ckipper_account_sync_manifest_init \"\$backup_dir\" src dst + _ckipper_account_sync_settings_apply '$src' '$dst' '[\"permissions\",\"allow\"]' \"\$backup_dir\" + jq -c '.permissions.allow' '$dst/settings.json' + jq -c '.permissions.deny' '$dst/settings.json' + jq -r '.unrelated' '$dst/settings.json'" + [[ "$output" == *'["Bash(ls:*)"]'* ]] + [[ "$output" == *'["Bash(rm:*)"]'* ]] + [[ "$output" == *"keep"* ]] +} + +# ── Prefs strategy ─────────────────────────────────────────────────────── + +setup_prefs_registry() { + cat > "$CKIPPER_REGISTRY" <` to the matching _ckipper_config_* +# handler, prints overview help, and suggests the closest subcommand on a +# typo via _core_unknown_command (which handles the fuzzy match + help line). + +# Known config subcommands. Used both for routing and for fuzzy-suggest. +_CKIPPER_CONFIG_SUBCOMMANDS=(get set unset list edit help) + +# Dispatch a `config` subcommand. 
+#
+# Args:
+#   $1 — subcommand name (get, set, unset, list, edit, help, -h, --help, or empty)
+#   $2..$N — arguments forwarded to the subcommand handler
+#
+# Returns: handler exit status; 1 on unknown subcommand.
+#
+# Errors (stderr):
+#   "Unknown command: '<cmd>'. Did you mean: '<suggestion>'? ..." (via _core_unknown_command)
+_ckipper_config_dispatch() {
+  local cmd="$1"
+  shift 2>/dev/null
+  case "$cmd" in
+    get|set|unset|list|edit)
+      # Honour the `<subcommand> --help` contract documented in
+      # ckipper.zsh — short-circuit before invoking the handler so the
+      # user sees usage instead of a "missing required arg" error.
+      if [[ "$1" == "--help" || "$1" == "-h" ]]; then
+        _ckipper_config_help
+        return 0
+      fi
+      "_ckipper_config_${cmd}" "$@"
+      ;;
+    ""|help|-h|--help) _ckipper_config_help ;;
+    *) _ckipper_config_unknown "$cmd"; return 1 ;;
+  esac
+}
+
+# Print the unknown-subcommand line plus a help pointer. Always writes to stderr.
+#
+# Args: $1 — the unknown subcommand the user typed.
+# Returns: 0 always.
+_ckipper_config_unknown() {
+  _core_unknown_command "$1" \
+    "Run 'ckipper config help' for available commands." \
+    "${_CKIPPER_CONFIG_SUBCOMMANDS[@]}"
+}
+
+# Print the config-namespace usage summary.
+#
+# Returns: 0 always.
+_ckipper_config_help() {
+  _core_help_render "ckipper config — read and write Ckipper configuration" \
+    "" \
+    "Usage:" \
+    "  ckipper config get <key> [--account <name>]            Print the resolved value" \
+    "  ckipper config set <key> [--account <name>] [value]    Set a key (prompts if value omitted)" \
+    "  ckipper config unset <key> [--account <name>]          Remove an override (revert to default)" \
+    "  ckipper config list [--account <name>] [--format=fmt]  List every key (table | json | env)" \
+    "  ckipper config edit [--account <name>]                 Open the underlying file in \$EDITOR" \
+    "" \
+    "Scope:" \
+    "  Global keys live in ~/.ckipper/docker/ckipper-config.zsh." \
+    "  Account-scoped keys live under accounts.<name>.preferences in the registry" \
+    "  and require --account on set/unset."
+}
diff --git a/lib/config/dispatcher_test.bats b/lib/config/dispatcher_test.bats
new file mode 100644
index 0000000..1045ea5
--- /dev/null
+++ b/lib/config/dispatcher_test.bats
@@ -0,0 +1,125 @@
+#!/usr/bin/env bats
+# Module-level tests for lib/config/dispatcher.zsh.
+# Verifies routing of `ckipper config <subcommand>` to the per-subcommand
+# handlers, plus the unknown-subcommand path. The dispatcher and handlers are
+# zsh-only, so each test spawns a zsh subshell that sources schema.zsh +
+# core/config.zsh + core/fuzzy.zsh + core/registry.zsh + every config handler +
+# dispatcher (matching the pattern in lib/core/config_test.bats).
+#
+# core/registry.zsh is sourced because handlers call _core_account_dir to
+# validate that --account names refer to registered accounts (rejects typos
+# that would otherwise silently create phantom registry records).
+
+load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  mkdir -p "$CKIPPER_DIR/docker"
+  : >"$CKIPPER_DIR/docker/ckipper-config.zsh"
+  # v2 accounts.json fixture with one `work` account having empty preferences.
+  cat >"$CKIPPER_REGISTRY" <<'JSON'
+{"version":2,"default":"work","accounts":{"work":{"config_dir":"/x","keychain_service":null,"registered_at":"t","preferences":{}}}}
+JSON
+}
+
+teardown() {
+  teardown_isolated_env
+}
+
+# Helper: source schema + every config module in zsh and run zsh_cmd.
+_run_config_dispatch() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + CKIPPER_REGISTRY_VERSION="${CKIPPER_REGISTRY_VERSION:-2}" \ + PATH="$PATH" \ + zsh -c " + source \"$REPO_ROOT/lib/core/schema.zsh\" + source \"$REPO_ROOT/lib/core/config.zsh\" + source \"$REPO_ROOT/lib/core/registry.zsh\" + source \"$REPO_ROOT/lib/core/fuzzy.zsh\" + source \"$REPO_ROOT/lib/core/style.zsh\" + source \"$REPO_ROOT/lib/core/help.zsh\" + source \"$REPO_ROOT/lib/config/get.zsh\" + source \"$REPO_ROOT/lib/config/set.zsh\" + source \"$REPO_ROOT/lib/config/unset.zsh\" + source \"$REPO_ROOT/lib/config/list.zsh\" + source \"$REPO_ROOT/lib/config/edit.zsh\" + source \"$REPO_ROOT/lib/config/dispatcher.zsh\" + $zsh_cmd + " +} + +@test "dispatcher routes set with explicit value to global key" { + _run_config_dispatch "_ckipper_config_dispatch set notify_bell false && _ckipper_config_dispatch get notify_bell" + + [ "$status" -eq 0 ] + [ "$output" = "false" ] +} + +@test "dispatcher routes set --account to account-scoped key" { + _run_config_dispatch "_ckipper_config_dispatch set --account work always_docker true && _ckipper_config_dispatch get --account work always_docker" + + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "dispatcher routes unset and reverts to schema default" { + _run_config_dispatch "_ckipper_config_dispatch set notify_bell false && _ckipper_config_dispatch unset notify_bell && _ckipper_config_dispatch get notify_bell" + + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "dispatcher rejects unknown key on set" { + _run_config_dispatch "_ckipper_config_dispatch set not_a_key value" + + [ "$status" -ne 0 ] +} + +@test "dispatcher unknown subcommand suggests help pointer" { + _run_config_dispatch "_ckipper_config_dispatch nope" + + [ "$status" -ne 0 ] + [[ "$output" =~ "config help" ]] +} + +# Regression: per the contract documented in ckipper.zsh:191-193, every +# namespace 
dispatcher must accept `<subcommand> --help` as a synonym for
+# overview help. Account/worktree dispatchers honoured this; config did not
+# — `ckipper config get --help` returned "Unknown flag: '--help'".
+@test "dispatcher routes 'set --help' to namespace help (does not run set)" {
+  _run_config_dispatch "_ckipper_config_dispatch set --help"
+
+  [ "$status" -eq 0 ]
+  [[ "$output" =~ "ckipper config" ]]
+}
+
+@test "dispatcher routes 'get -h' to namespace help" {
+  _run_config_dispatch "_ckipper_config_dispatch get -h"
+
+  [ "$status" -eq 0 ]
+  [[ "$output" =~ "ckipper config" ]]
+}
+
+@test "dispatcher routes 'unset --help' to namespace help" {
+  _run_config_dispatch "_ckipper_config_dispatch unset --help"
+
+  [ "$status" -eq 0 ]
+  [[ "$output" =~ "ckipper config" ]]
+}
+
+@test "dispatcher routes 'list --help' to namespace help" {
+  _run_config_dispatch "_ckipper_config_dispatch list --help"
+
+  [ "$status" -eq 0 ]
+  [[ "$output" =~ "ckipper config" ]]
+}
+
+@test "dispatcher routes 'edit --help' to namespace help" {
+  _run_config_dispatch "_ckipper_config_dispatch edit --help"
+
+  [ "$status" -eq 0 ]
+  [[ "$output" =~ "ckipper config" ]]
+}
diff --git a/lib/config/edit.zsh b/lib/config/edit.zsh
new file mode 100644
index 0000000..531d353
--- /dev/null
+++ b/lib/config/edit.zsh
@@ -0,0 +1,163 @@
+#!/usr/bin/env zsh
+# `ckipper config edit` handler. Opens the global config file in $EDITOR, or
+# round-trips an account's preferences JSON object through a tmpfile when
+# called with --account.
+
+# Open the global config file in the user's preferred editor. No validation —
+# the file is sourced lazily by ckipper.zsh on next shell startup.
+#
+# Returns: editor exit status.
+# Errors (stderr): editor errors (e.g. "command not found") pass through
+#   unchanged from the underlying $EDITOR invocation.
+_ckipper_config_edit_global() {
+  local file
+  file=$(_core_config_global_file)
+  mkdir -p "${file:h}"
+  [[ -f "$file" ]] || : >"$file"
+  "${EDITOR:-vi}" "$file"
+}
+
+# Write an account's preferences JSON to a fresh tmpfile and return its path
+# on stdout. Caller owns deletion.
+#
+# Args: $1 — account name.
+# Returns: 0 on success; 1 on jq failure.
+_ckipper_config_edit_dump_prefs() {
+  local account="$1"
+  local tmp
+  tmp=$(mktemp -t "ckipper-config-edit-XXXXXX") || return 1
+  if ! jq --arg n "$account" '.accounts[$n].preferences // {}' "$CKIPPER_REGISTRY" >"$tmp"; then
+    rm -f "$tmp"
+    return 1
+  fi
+  print -- "$tmp"
+}
+
+# Validate that a tmpfile contains parseable JSON.
+#
+# Args: $1 — path to candidate JSON file.
+# Returns: 0 if parseable; 1 otherwise. jq's diagnostics are suppressed; the
+# caller reports the failure.
+_ckipper_config_edit_validate_json() {
+  # NB: zsh's `path` is tied to $PATH — declaring `local path=...` would wipe
+  # PATH for the duration of the function and break every external command.
+  local file="$1"
+  jq empty "$file" >/dev/null 2>&1
+}
+
+# Walk every key in the edited JSON file and verify each is an account-scoped
+# schema key whose value passes the schema type check. Guards writeback from
+# persisting unknown keys, wrong-typed values, or global-scope keys sneaked
+# into an account's preferences (which would otherwise silently corrupt the
+# resolution chain in _core_config_get).
+#
+# Args: $1 — path to edited JSON file.
+# Returns: 0 when every key/value passes; 1 on the first violation.
+# Errors (stderr):
+#   "Unknown config key in account preferences: '<key>'" — when key not in schema.
+#   "Key '<key>' is global-scope; cannot live in account preferences." — when scope=global.
+#   "Invalid value for '<key>': ..." — propagated from _core_config_validate.
+_ckipper_config_edit_validate_schema() {
+  local edited="$1" key value type scope
+  local -a edited_keys
+  edited_keys=( ${(f)"$(jq -r 'keys[]' "$edited")"} )
+  for key in "${edited_keys[@]}"; do
+    type="${_CKIPPER_SCHEMA_TYPE[$key]:-}"
+    scope="${_CKIPPER_SCHEMA_SCOPE[$key]:-}"
+    if [[ -z "$type" ]]; then
+      echo "Unknown config key in account preferences: '$key'" >&2
+      return 1
+    fi
+    if [[ "$scope" != "account" ]]; then
+      echo "Key '$key' is global-scope; cannot live in account preferences." >&2
+      return 1
+    fi
+    value=$(jq -r --arg k "$key" '.[$k] | tostring' "$edited")
+    _core_config_validate "$key" "$value" || return 1
+  done
+}
+
+# Slurp the edited preferences JSON back into the registry under
+# accounts.<name>.preferences. Routes through _core_registry_update so an
+# `edit --account` racing with another writer (e.g. an `account add` or a
+# `config set`) cannot lose updates. The --slurpfile side input pulls the
+# edited file in at jq-call time, inside the lock.
+#
+# Args: $1 — account name, $2 — path to edited JSON file.
+# Returns: 0 on success; 1 on lock-acquisition or jq/write failure.
+_ckipper_config_edit_writeback() {
+  local account="$1" edited="$2"
+  _core_registry_update '.accounts[$n].preferences = $p[0]' \
+    --arg n "$account" --slurpfile p "$edited"
+}
+
+# Open an account's preferences in $EDITOR. Round-trip: dump → edit →
+# validate (parse + schema) → writeback. Aborts (and leaves the registry
+# untouched) on a parse failure or any schema violation.
+#
+# Args: $1 — account name.
+# Returns: 0 on success; 1 on dump/validate/writeback failure or unregistered
+#   account.
+# Errors (stderr):
+#   "Account '<name>' is not registered." — propagated from _core_account_dir.
+#   "Edited file is not valid JSON; registry not updated." — when the edited
+#   tmpfile fails jq parse.
+#   schema-validation messages — propagated from _ckipper_config_edit_validate_schema.
+_ckipper_config_edit_account() {
+  local account="$1"
+  _core_account_dir "$account" >/dev/null || return 1
+  # Ensure the tmpfile is removed even if the user kills the editor (Ctrl-C)
+  # or the shell receives a TERM signal mid-edit. local_traps scopes the
+  # trap to this function so it doesn't leak to callers.
+  setopt local_options local_traps
+  local tmp
+  tmp=$(_ckipper_config_edit_dump_prefs "$account") || return 1
+  trap 'rm -f "$tmp"' EXIT INT TERM
+  "${EDITOR:-vi}" "$tmp"
+  if ! _ckipper_config_edit_validate_json "$tmp"; then
+    echo "Edited file is not valid JSON; registry not updated." >&2
+    return 1
+  fi
+  _ckipper_config_edit_validate_schema "$tmp" || return 1
+  _ckipper_config_edit_writeback "$account" "$tmp"
+}
+
+# Public edit entry point. Routes between the global-file editor and the
+# account-preferences round-trip per the --account flag.
+#
+# Args: $1..$N — `[--account <name>]`.
+#
+# Returns: editor / handler exit status; 1 on unknown flag, stray positional
+#   argument, or unregistered account.
+# Errors (stderr):
+#   "Unknown flag: '<flag>'" — when an unrecognized --flag is encountered.
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "ckipper config edit takes no positional arguments. Did you mean: --account <name>?"
+#     — when the user passes a bare positional (e.g. `ckipper config edit work`).
+#   "Account '<name>' is not registered." — propagated from _core_account_dir
+#   via _ckipper_config_edit_account.
+_ckipper_config_edit() {
+  _core_registry_check_version || return 1
+  local account=""
+  while (( $# > 0 )); do
+    case "$1" in
+      --account)
+        [[ -z "${2:-}" ]] && { echo "Flag --account requires a value." >&2; return 1; }
+        account="$2"; shift 2
+        ;;
+      --account=*) account="${1#--account=}"; shift ;;
+      -*)
+        echo "Unknown flag: '$1'" >&2
+        return 1
+        ;;
+      *)
+        echo "ckipper config edit takes no positional arguments. Did you mean: --account $1?" >&2
+        return 1
+        ;;
+    esac
+  done
+  if [[ -z "$account" ]]; then
+    _ckipper_config_edit_global
+  else
+    _ckipper_config_edit_account "$account"
+  fi
+}
diff --git a/lib/config/edit_test.bats b/lib/config/edit_test.bats
new file mode 100644
index 0000000..cd5020e
--- /dev/null
+++ b/lib/config/edit_test.bats
@@ -0,0 +1,203 @@
+#!/usr/bin/env bats
+# Module-level tests for lib/config/edit.zsh.
+# Verifies the round-trip flow used by `ckipper config edit --account <name>`:
+# dump → edit → validate → writeback. The handler is zsh-only and depends on
+# the schema, core/config, and core/registry primitives.
+#
+# EDITOR mocking: tests use `EDITOR=true` for a no-op edit and a one-line
+# zsh script for the "overwrite-with-garbage" case. The script lives in
+# $TMP_HOME and is written per-test rather than in setup so each test owns
+# its mock.
+
+load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  mkdir -p "$CKIPPER_DIR/docker"
+  : >"$CKIPPER_DIR/docker/ckipper-config.zsh"
+  # v2 accounts.json fixture: one registered `work` account with an existing
+  # always_docker preference so writeback round-trips have something to read.
+  cat >"$CKIPPER_REGISTRY" <<'JSON'
+{"version":2,"default":"work","accounts":{"work":{"config_dir":"/x","keychain_service":null,"registered_at":"t","preferences":{"always_docker":true}}}}
+JSON
+}
+
+teardown() {
+  teardown_isolated_env
+}
+
+# Helper: source schema + core/config + core/registry + edit, then run cmd.
+# EDITOR is forwarded so each test can swap in its own mock.
+_run_config_edit() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + CKIPPER_REGISTRY_VERSION="${CKIPPER_REGISTRY_VERSION:-2}" \ + PATH="$PATH" \ + EDITOR="${EDITOR:-true}" \ + zsh -c " + source \"$REPO_ROOT/lib/core/schema.zsh\" + source \"$REPO_ROOT/lib/core/config.zsh\" + source \"$REPO_ROOT/lib/core/registry.zsh\" + source \"$REPO_ROOT/lib/config/edit.zsh\" + $zsh_cmd + " +} + +@test "edit --account round-trips successfully when EDITOR is no-op" { + local before + before=$(jq -S . "$CKIPPER_REGISTRY") + + EDITOR=true _run_config_edit "_ckipper_config_edit --account work" + + [ "$status" -eq 0 ] + local after + after=$(jq -S . "$CKIPPER_REGISTRY") + [ "$before" = "$after" ] + # Registry preferences for the account remain readable and intact. + run jq -r '.accounts.work.preferences.always_docker' "$CKIPPER_REGISTRY" + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "edit --account rejects malformed JSON and leaves registry untouched" { + local mock_editor="$TMP_HOME/bad-editor" + cat >"$mock_editor" <<'SH' +#!/usr/bin/env zsh +print -- "not json" >"$1" +SH + chmod +x "$mock_editor" + local before + before=$(jq -S . "$CKIPPER_REGISTRY") + + EDITOR="$mock_editor" _run_config_edit "_ckipper_config_edit --account work" + + [ "$status" -ne 0 ] + [[ "$output" == *"not valid JSON"* ]] + local after + after=$(jq -S . "$CKIPPER_REGISTRY") + [ "$before" = "$after" ] +} + +@test "edit --account on unregistered account fails with registry error" { + local before + before=$(jq -S . "$CKIPPER_REGISTRY") + + EDITOR=true _run_config_edit "_ckipper_config_edit --account ghost" + + [ "$status" -ne 0 ] + [[ "$output" == *"Account 'ghost' is not registered."* ]] + local after + after=$(jq -S . 
"$CKIPPER_REGISTRY") + [ "$before" = "$after" ] +} + +@test "edit with no flags opens the global config file" { + # Seed the global file with a sentinel so we can detect that EDITOR + # received it as its argument (EDITOR=cat prints the file's contents). + local sentinel="CKIPPER_TEST_SENTINEL=42" + echo "$sentinel" >>"$CKIPPER_DIR/docker/ckipper-config.zsh" + + EDITOR=cat _run_config_edit "_ckipper_config_edit" + + [ "$status" -eq 0 ] + [[ "$output" == *"$sentinel"* ]] +} + +@test "edit rejects positional arg with helpful suggestion" { + EDITOR=true _run_config_edit "_ckipper_config_edit work" + + [ "$status" -ne 0 ] + [[ "$output" == *"takes no positional arguments"* ]] + [[ "$output" == *"--account work"* ]] +} + +# I-1 regression: writeback must route through _core_registry_update so +# concurrent writers can't lose updates. Stubbing _core_registry_update to +# drop a marker proves the routing — bypass paths that mktemp+mv directly +# never invoke the stub. +@test "edit --account writeback routes through _core_registry_update" { + local marker="$CKIPPER_DIR/_registry_update_called" + + EDITOR=true _run_config_edit " + _core_registry_update() { : > '$marker'; return 0; } + _ckipper_config_edit --account work + " + + [ "$status" -eq 0 ] + [ -f "$marker" ] +} + +# I-2 regression: edited account preferences must be schema-validated before +# writeback. The previous implementation only ran `jq empty`, so users could +# add unknown keys, give known keys a wrong-typed value, or sneak in a +# global-scope key — and the writeback would silently persist all of it. +# +# Each test seeds a mock editor that overwrites the dumped file with the +# specified malformed JSON, then asserts the writeback aborts and the +# registry remains untouched. + +# Write a one-shot zsh `editor` that overwrites $1 with the supplied JSON. +# Returns the path to the editor on stdout; the caller chmods+exports it. 
+_make_editor_writing() {
+  local json="$1" path="$TMP_HOME/edit-stub-$$"
+  cat >"$path" <<SH
+#!/usr/bin/env zsh
+print -- '$json' >"\$1"
+SH
+  chmod +x "$path"
+  echo "$path"
+}
+
+@test "edit --account rejects unknown key and leaves registry untouched" {
+  local editor; editor=$(_make_editor_writing '{"not_a_real_key": true}')
+  local before; before=$(jq -S . "$CKIPPER_REGISTRY")
+
+  EDITOR="$editor" _run_config_edit "_ckipper_config_edit --account work"
+
+  [ "$status" -ne 0 ]
+  [[ "$output" == *"not_a_real_key"* ]]
+  local after; after=$(jq -S . "$CKIPPER_REGISTRY")
+  [ "$before" = "$after" ]
+}
+
+@test "edit --account rejects wrong-typed value for known key and leaves registry untouched" {
+  local editor; editor=$(_make_editor_writing '{"always_docker": "rm -rf /"}')
+  local before; before=$(jq -S . "$CKIPPER_REGISTRY")
+
+  EDITOR="$editor" _run_config_edit "_ckipper_config_edit --account work"
+
+  [ "$status" -ne 0 ]
+  [[ "$output" == *"always_docker"* ]]
+  local after; after=$(jq -S . "$CKIPPER_REGISTRY")
+  [ "$before" = "$after" ]
+}
+
+@test "edit --account rejects global-scope key in account preferences" {
+  # notify_bell is a real schema key with scope=global; it must not appear in
+  # an account's preferences block even though the type matches.
+  local editor; editor=$(_make_editor_writing '{"notify_bell": true}')
+  local before; before=$(jq -S . "$CKIPPER_REGISTRY")
+
+  EDITOR="$editor" _run_config_edit "_ckipper_config_edit --account work"
+
+  [ "$status" -ne 0 ]
+  [[ "$output" == *"notify_bell"* ]]
+  local after; after=$(jq -S . "$CKIPPER_REGISTRY")
+  [ "$before" = "$after" ]
+}
+
+@test "edit --account accepts valid edits with all schema keys" {
+  local editor; editor=$(_make_editor_writing '{"always_docker": false, "always_firewall": true, "ssh_forward": false}')
+
+  EDITOR="$editor" _run_config_edit "_ckipper_config_edit --account work"
+
+  [ "$status" -eq 0 ]
+  run jq -r '.accounts.work.preferences.always_docker' "$CKIPPER_REGISTRY"
+  [ "$output" = "false" ]
+  run jq -r '.accounts.work.preferences.always_firewall' "$CKIPPER_REGISTRY"
+  [ "$output" = "true" ]
+  run jq -r '.accounts.work.preferences.ssh_forward' "$CKIPPER_REGISTRY"
+  [ "$output" = "false" ]
+}
diff --git a/lib/config/get.zsh b/lib/config/get.zsh
new file mode 100644
index 0000000..dd8f764
--- /dev/null
+++ b/lib/config/get.zsh
@@ -0,0 +1,58 @@
+#!/usr/bin/env zsh
+# `ckipper config get` handler. Thin wrapper over _core_config_get that adds
+# CLI-level argument parsing and schema-membership validation.
+
+# Verify the user supplied a key and that it exists in the schema. Surfaces
+# both the usage line and the unknown-key error so the caller can treat the
+# result as a single validation gate.
+#
+# Args: $1 — candidate key (may be empty).
+# Returns: 0 if the key is non-empty and in the schema; 1 otherwise.
+# Errors (stderr):
+#   "Usage: ckipper config get [--account <name>] <key>" — when no key.
+#   "Unknown config key: '<key>'" — key not in schema.
+_ckipper_config_get_validate_key() {
+  local key="$1"
+  if [[ -z "$key" ]]; then
+    echo "Usage: ckipper config get [--account <name>] <key>" >&2
+    return 1
+  fi
+  if [[ -z "${_CKIPPER_SCHEMA_TYPE[$key]:-}" ]]; then
+    echo "Unknown config key: '$key'" >&2
+    return 1
+  fi
+}
+
+# Print the resolved value of a configuration key.
+#
+# Args:
+#   $1..$N — `[--account <name>] <key>`. The flag and its value may appear in
+#   any order before the positional <key>; only one --account is read.
+#
+# Returns: 0 on success; 1 on missing key, unknown key, unknown flag, or
+# unregistered account.
+#
+# Errors (stderr):
+#   "Usage: ckipper config get [--account <name>] <key>" — when no key supplied.
+#   "Unknown config key: '<key>'" — when key is not in the schema.
+#   "Unknown flag: '<flag>'" — when an unrecognized flag is encountered.
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "Account '<name>' is not registered." — propagated from _core_account_dir.
_ckipper_config_get() {
+  _core_registry_check_version || return 1
+  local account="" key=""
+  while (( $# > 0 )); do
+    case "$1" in
+      --account)
+        [[ -z "${2:-}" ]] && { echo "Flag --account requires a value." >&2; return 1; }
+        account="$2"; shift 2
+        ;;
+      --account=*) account="${1#--account=}"; shift ;;
+      -*) echo "Unknown flag: '$1'" >&2; return 1 ;;
+      *) key="$1"; shift ;;
+    esac
+  done
+  _ckipper_config_get_validate_key "$key" || return 1
+  [[ -n "$account" ]] && { _core_account_dir "$account" >/dev/null || return 1; }
+  _core_config_get "$key" "$account"
+}
diff --git a/lib/config/list.zsh b/lib/config/list.zsh
new file mode 100644
index 0000000..e7702b8
--- /dev/null
+++ b/lib/config/list.zsh
@@ -0,0 +1,147 @@
+#!/usr/bin/env zsh
+# `ckipper config list` handler. Renders every effective configuration key in
+# one of three formats: table (default), json, env.
+#
+# Account-scope filtering: account-scoped keys (those with SCOPE=="account")
+# are emitted only when the caller passes `--account <name>`. Without that
+# flag, only global keys appear — printing account-scoped defaults without an
+# account would be misleading because their effective value is per-account.
+#
+# Phase-2 dependency: _core_style_header / _core_style_divider live in
+# lib/core/style.zsh (not yet landed). Tests stub them; production callers
+# source style.zsh from ckipper.zsh before list.zsh.
+
+# Decide whether a schema key should appear in the listing for the current
+# scope choice. Global keys always appear; account-scoped keys appear only
+# when --account was supplied.
+#
+# Args: $1 — schema key, $2 — account name ("" when --account omitted).
+# Returns: 0 if the key should be listed; 1 if it should be skipped.
+_ckipper_config_list_should_include() {
+  local key="$1" account="$2"
+  local scope="${_CKIPPER_SCHEMA_SCOPE[$key]:-global}"
+  [[ "$scope" == "global" ]] && return 0
+  [[ "$scope" == "account" && -n "$account" ]] && return 0
+  return 1
+}
+
+# Print sorted list of keys this invocation will emit, one per line.
+#
+# Args: $1 — account name ("" when --account omitted).
+# Returns: 0 always. Stdout: newline-separated keys in lexical order.
+_ckipper_config_list_keys() {
+  local account="$1" key
+  for key in "${(@kon)_CKIPPER_SCHEMA_TYPE}"; do
+    if _ckipper_config_list_should_include "$key" "$account"; then
+      print -- "$key"
+    fi
+  done
+}
+
+# Render the table format: header, divider, then `<key>=<value>` lines.
+#
+# Args: $1 — account name ("" when --account omitted).
+# Returns: 0 always.
+_ckipper_config_list_table() {
+  local account="$1" key value
+  _core_style_header "Ckipper config"
+  _core_style_divider
+  while IFS= read -r key; do
+    value=$(_core_config_get "$key" "$account")
+    print -- "$key=$value"
+  done < <(_ckipper_config_list_keys "$account")
+}
+
+# Render the JSON format. Builds the object key-by-key with jq so values
+# containing quotes, backslashes, or other JSON metacharacters are encoded
+# safely. Output is a single JSON object on stdout.
+#
+# Args: $1 — account name ("" when --account omitted).
+# Returns: 0 always.
+_ckipper_config_list_json() {
+  local account="$1" key value
+  local doc='{}'
+  while IFS= read -r key; do
+    value=$(_core_config_get "$key" "$account")
+    doc=$(jq --arg k "$key" --arg v "$value" '. + {($k): $v}' <<<"$doc")
+  done < <(_ckipper_config_list_keys "$account")
+  print -- "$doc"
+}
+
+# Render the env format: `CKIPPER_<KEY>=<value>` lines, one per key.
+#
+# Args: $1 — account name ("" when --account omitted).
+# Returns: 0 always.
+_ckipper_config_list_env() {
+  local account="$1" key value var
+  while IFS= read -r key; do
+    value=$(_core_config_get "$key" "$account")
+    var=$(_core_config_global_var "$key")
+    print -- "$var=$value"
+  done < <(_ckipper_config_list_keys "$account")
+}
+
+# Validate the format token against the supported renderers.
+#
+# Args: $1 — format string.
+# Returns: 0 if recognized; 1 otherwise.
+# Errors (stderr): "Unknown format: '<fmt>' (expected: table, json, env)".
+_ckipper_config_list_validate_format() {
+  case "$1" in
+    table | json | env) return 0 ;;
+  esac
+  echo "Unknown format: '$1' (expected: table, json, env)" >&2
+  return 1
+}
+
+# Dispatch to the renderer matching the resolved format. Caller is responsible
+# for having validated the format already via _ckipper_config_list_validate_format.
+#
+# Args: $1 — format ("table" | "json" | "env"), $2 — account ("" if global).
+# Returns: renderer's exit status.
+_ckipper_config_list_render() {
+  local format="$1" account="$2"
+  case "$format" in
+    table) _ckipper_config_list_table "$account" ;;
+    json) _ckipper_config_list_json "$account" ;;
+    env) _ckipper_config_list_env "$account" ;;
+  esac
+}
+
+# Public list entry point. Parses flags then delegates to a format printer.
+#
+# Args: $1..$N — `[--account <name>] [--format=table|json|env]`.
+#
+# Returns: 0 on success; 1 on argument-parse failure or unregistered account.
+#
+# Errors (stderr):
+#   "Unknown flag: '<flag>'" — when an unrecognized argument is encountered.
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "Flag --format requires a value." — when --format has no following arg.
+#   "Unknown format: '<fmt>' (expected: table, json, env)" — invalid format.
+#   "Account '<name>' is not registered." — propagated from _core_account_dir.
+_ckipper_config_list() {
+  _core_registry_check_version || return 1
+  local account="" format="table"
+  while (( $# > 0 )); do
+    case "$1" in
+      --account)
+        [[ -z "${2:-}" ]] && { echo "Flag --account requires a value." >&2; return 1; }
+        account="$2"; shift 2
+        ;;
+      --account=*) account="${1#--account=}"; shift ;;
+      --format=*) format="${1#--format=}"; shift ;;
+      --format)
+        [[ -z "${2:-}" ]] && { echo "Flag --format requires a value." >&2; return 1; }
+        format="$2"; shift 2
+        ;;
+      *)
+        echo "Unknown flag: '$1'" >&2
+        return 1
+        ;;
+    esac
+  done
+  _ckipper_config_list_validate_format "$format" || return 1
+  [[ -n "$account" ]] && { _core_account_dir "$account" >/dev/null || return 1; }
+  _ckipper_config_list_render "$format" "$account"
+}
diff --git a/lib/config/list_test.bats b/lib/config/list_test.bats
new file mode 100644
index 0000000..21301c4
--- /dev/null
+++ b/lib/config/list_test.bats
@@ -0,0 +1,71 @@
+#!/usr/bin/env bats
+# Module-level tests for lib/config/list.zsh.
+# Verifies the three output formats (table, json, env) and account-scope
+# filtering. The handler is zsh-only and depends on Phase-2 style helpers
+# (_core_style_header / _core_style_divider) — those are stubbed in the
+# zsh -c payload before sourcing list.zsh.
+
+load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  mkdir -p "$CKIPPER_DIR/docker"
+  : >"$CKIPPER_DIR/docker/ckipper-config.zsh"
+  cat >"$CKIPPER_REGISTRY" <<'JSON'
+{"version":2,"default":"work","accounts":{"work":{"config_dir":"/x","keychain_service":null,"registered_at":"t","preferences":{}}}}
+JSON
+}
+
+teardown() {
+  teardown_isolated_env
+}
+
+# Helper: source schema + core/config + Phase-2 style stubs + list, then run cmd.
+_run_config_list() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + CKIPPER_REGISTRY_VERSION="${CKIPPER_REGISTRY_VERSION:-2}" \ + PATH="$PATH" \ + zsh -c " + source \"$REPO_ROOT/lib/core/schema.zsh\" + source \"$REPO_ROOT/lib/core/config.zsh\" + source \"$REPO_ROOT/lib/core/registry.zsh\" + _core_style_header() { print -- \"## \$1\"; } + _core_style_divider() { print -- \"---\"; } + source \"$REPO_ROOT/lib/config/list.zsh\" + $zsh_cmd + " +} + +@test "list table includes every global key" { + _run_config_list "_ckipper_config_list" + + [ "$status" -eq 0 ] + [[ "$output" == *"notify_bell"* ]] + [[ "$output" == *"default_branch"* ]] + [[ "$output" == *"dep_install_cmd"* ]] +} + +@test "list json is valid JSON" { + _run_config_list "_ckipper_config_list --format=json" + + [ "$status" -eq 0 ] + echo "$output" | jq empty +} + +@test "list env emits CKIPPER_ lines" { + _run_config_list "_ckipper_config_list --format=env" + + [ "$status" -eq 0 ] + [[ "$output" == *"CKIPPER_NOTIFY_BELL="* ]] +} + +@test "list --account adds account-scoped keys" { + _run_config_list "_ckipper_config_list --account work" + + [ "$status" -eq 0 ] + [[ "$output" == *"always_docker"* ]] + [[ "$output" == *"ssh_forward"* ]] +} diff --git a/lib/config/set.zsh b/lib/config/set.zsh new file mode 100644 index 0000000..5d44001 --- /dev/null +++ b/lib/config/set.zsh @@ -0,0 +1,113 @@ +#!/usr/bin/env zsh +# `ckipper config set` handler. Thin wrapper over _core_config_set that adds +# CLI-level argument parsing, schema-membership validation, and an interactive +# value-prompt fallback when the user omits the value argument. +# +# The prompt fallback delegates to _core_prompt_input (lib/core/prompt.zsh), +# which honors CKIPPER_NO_GUM=1 and reads from stdin in the fallback path. + +# Verify the user supplied a key and that it exists in the schema. 
Surfaces
+# both the no-key usage line and the unknown-key error so the caller can
+# treat the result as a single validation gate.
+#
+# Args: $1 — candidate key (may be empty).
+# Returns: 0 if the key is non-empty and in the schema; 1 otherwise.
+# Errors (stderr):
+#   "Usage: ckipper config set [--account <name>] <key> [value]" — no key.
+#   "Unknown config key: '<key>'" — key not in schema.
+_ckipper_config_set_validate_key() {
+  local key="$1"
+  if [[ -z "$key" ]]; then
+    echo "Usage: ckipper config set [--account <name>] <key> [value]" >&2
+    return 1
+  fi
+  if [[ -z "${_CKIPPER_SCHEMA_TYPE[$key]:-}" ]]; then
+    echo "Unknown config key: '$key'" >&2
+    return 1
+  fi
+}
+
+# Module-level scratch globals populated by _ckipper_config_set_parse_args.
+# Lifecycle: reset by the parser on each call, consumed by _ckipper_config_set,
+# then left in place. They are not meant to be read by other modules.
+typeset -g _CKIPPER_CONFIG_SET_ACCOUNT=""
+typeset -g _CKIPPER_CONFIG_SET_KEY=""
+typeset -g _CKIPPER_CONFIG_SET_VALUE=""
+typeset -g _CKIPPER_CONFIG_SET_HAS_VALUE="false"
+
+# Capture a positional arg into the parser's scratch globals: first positional
+# becomes the key, second becomes the value (and flips has_value to "true").
+# Extracted to keep _ckipper_config_set_parse_args at <=2 nesting levels.
+#
+# Args: $1 — the positional token.
+# Returns: 0 always.
+_ckipper_config_set_record_positional() {
+  if [[ -z "$_CKIPPER_CONFIG_SET_KEY" ]]; then
+    _CKIPPER_CONFIG_SET_KEY="$1"
+    return 0
+  fi
+  _CKIPPER_CONFIG_SET_VALUE="$1"
+  _CKIPPER_CONFIG_SET_HAS_VALUE="true"
+}
+
+# Parse the `ckipper config set` arg list into the module scratch globals.
+#
+# Args: $@ — the user's argv after `ckipper config set`.
+# Returns: 0 on success; 1 on missing --account value or unknown flag.
+# Errors (stderr):
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "Unknown flag: '<flag>'" — when an unrecognized flag is encountered.
+_ckipper_config_set_parse_args() {
+  _CKIPPER_CONFIG_SET_ACCOUNT=""
+  _CKIPPER_CONFIG_SET_KEY=""
+  _CKIPPER_CONFIG_SET_VALUE=""
+  _CKIPPER_CONFIG_SET_HAS_VALUE="false"
+  while (( $# > 0 )); do
+    case "$1" in
+      --account)
+        [[ -z "${2:-}" ]] && { echo "Flag --account requires a value." >&2; return 1; }
+        _CKIPPER_CONFIG_SET_ACCOUNT="$2"; shift 2
+        ;;
+      --account=*) _CKIPPER_CONFIG_SET_ACCOUNT="${1#--account=}"; shift ;;
+      -*) echo "Unknown flag: '$1'" >&2; return 1 ;;
+      *) _ckipper_config_set_record_positional "$1"; shift ;;
+    esac
+  done
+}
+
+# Set a configuration key. Routes to the global file or to the account
+# preference store based on the schema scope. When the user omits the
+# value argument, prompt them via _core_prompt_input.
+#
+# Args:
+#   $1..$N — `[--account <name>] <key> [<value>]`. When <value> is omitted
+#   the function prompts the user via _core_prompt_input, which
+#   reads from stdin in its CKIPPER_NO_GUM fallback path.
+#
+# Returns: 0 on success; 1 on unknown key, unknown flag, validation failure,
+# missing --account on an account-scoped key, or unregistered account.
+#
+# Errors (stderr):
+#   "Usage: ckipper config set [--account <name>] <key> [value]" — no key.
+#   "Unknown config key: '<key>'" — when key is not in the schema.
+#   "Unknown flag: '<flag>'" — when an unrecognized flag is encountered.
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "Account '<name>' is not registered." — propagated from _core_account_dir.
+#   "Key '<key>' requires --account." — propagated from _core_config_set when
+#   scope=account but no account name was supplied.
+#   "Invalid value for '<key>': '<value>' (expected <type>)" — propagated
+#   from _core_config_validate on type mismatch.
+_ckipper_config_set() {
+  _core_registry_check_version || return 1
+  _ckipper_config_set_parse_args "$@" || return 1
+  local account="$_CKIPPER_CONFIG_SET_ACCOUNT"
+  local key="$_CKIPPER_CONFIG_SET_KEY"
+  local value="$_CKIPPER_CONFIG_SET_VALUE"
+  _ckipper_config_set_validate_key "$key" || return 1
+  [[ -n "$account" ]] && { _core_account_dir "$account" >/dev/null || return 1; }
+  if [[ "$_CKIPPER_CONFIG_SET_HAS_VALUE" != "true" ]]; then
+    local prompt_label="Value for $key (${_CKIPPER_SCHEMA_TYPE[$key]})"
+    value=$(_core_prompt_input "$prompt_label" "")
+  fi
+  _core_config_set "$key" "$value" "$account"
+}
diff --git a/lib/config/unset.zsh b/lib/config/unset.zsh
new file mode 100644
index 0000000..00dc4cf
--- /dev/null
+++ b/lib/config/unset.zsh
@@ -0,0 +1,61 @@
+#!/usr/bin/env zsh
+# `ckipper config unset` handler. Thin wrapper over _core_config_unset that
+# adds CLI-level argument parsing and schema-membership validation.
+
+# Verify the user supplied a key and that it exists in the schema. Surfaces
+# both the usage line and the unknown-key error so the caller can treat the
+# result as a single validation gate.
+#
+# Args: $1 — candidate key (may be empty).
+# Returns: 0 if the key is non-empty and in the schema; 1 otherwise.
+# Errors (stderr):
+#   "Usage: ckipper config unset [--account <name>] <key>" — when no key.
+#   "Unknown config key: '<key>'" — key not in schema.
+_ckipper_config_unset_validate_key() {
+  local key="$1"
+  if [[ -z "$key" ]]; then
+    echo "Usage: ckipper config unset [--account <name>] <key>" >&2
+    return 1
+  fi
+  if [[ -z "${_CKIPPER_SCHEMA_TYPE[$key]:-}" ]]; then
+    echo "Unknown config key: '$key'" >&2
+    return 1
+  fi
+}
+
+# Remove the override for a configuration key, reverting future reads to the
+# schema default (or to the global value when an account override is removed).
+#
+# Args:
+#   $1..$N — `[--account <name>] <key>`. The flag and its value may appear in
+#     any order before the positional <key>; only one --account is read.
+#
+# Returns: 0 on success; 1 on missing key, unknown key, unknown flag, missing
+#          --account on an account-scoped key, or unregistered account.
+#
+# Errors (stderr):
+#   "Usage: ckipper config unset [--account <name>] <key>" — when no key.
+#   "Unknown config key: '<key>'" — when key is not in the schema.
+#   "Unknown flag: '<flag>'" — when an unrecognized flag is encountered.
+#   "Flag --account requires a value." — when --account has no following arg.
+#   "Account '<name>' is not registered." — propagated from _core_account_dir.
+#   "Key '<key>' requires --account." — propagated from _core_config_unset
+#     when scope=account but no account name was supplied.
+_ckipper_config_unset() {
+  _core_registry_check_version || return 1
+  local account="" key=""
+  while (( $# > 0 )); do
+    case "$1" in
+      --account)
+        [[ -z "${2:-}" ]] && { echo "Flag --account requires a value." >&2; return 1; }
+        account="$2"; shift 2
+        ;;
+      --account=*) account="${1#--account=}"; shift ;;
+      -*) echo "Unknown flag: '$1'" >&2; return 1 ;;
+      *) key="$1"; shift ;;
+    esac
+  done
+  _ckipper_config_unset_validate_key "$key" || return 1
+  [[ -n "$account" ]] && { _core_account_dir "$account" >/dev/null || return 1; }
+  _core_config_unset "$key" "$account"
+}
diff --git a/lib/core/config.zsh b/lib/core/config.zsh
new file mode 100644
index 0000000..2f73441
--- /dev/null
+++ b/lib/core/config.zsh
@@ -0,0 +1,287 @@
+#!/usr/bin/env zsh
+# Pure config get/set/unset/validate primitives. Operates on:
+#   - global file: $CKIPPER_DIR/docker/ckipper-config.zsh (zsh assignments)
+#   - per-account: $CKIPPER_REGISTRY (.accounts.<name>.preferences.<key>)
+#
+# Schema source-of-truth: lib/core/schema.zsh — must be sourced before this.
+# Functions here resolve the schema arrays at call time, never source-time.
+
+readonly _CORE_CONFIG_GLOBAL_PREFIX="CKIPPER_"
+
+# Translate a schema key to the global file's variable name.
+#
+# Args: $1 — schema key (e.g. "notify_bell")
+# Returns: 0; prints "CKIPPER_NOTIFY_BELL".
+_core_config_global_var() { + local key="$1" + echo "${_CORE_CONFIG_GLOBAL_PREFIX}${(U)key}" +} + +# Path to the global config file. +# +# Returns: 0; prints absolute path under $CKIPPER_DIR/docker/. +_core_config_global_file() { + echo "${CKIPPER_DIR:-$HOME/.ckipper}/docker/ckipper-config.zsh" +} + +# Read a global value from the config file without sourcing it. +# int_array values stored as zsh array literals (KEY=(a b c)) are returned as +# CSV (a,b,c) so callers see a consistent shape regardless of on-disk form. +# +# Args: $1 — schema key +# Returns: 0; prints the assigned value (quotes stripped, array→CSV) or empty +# string if unset. +_core_config_read_global() { + local key="$1" + local var + var=$(_core_config_global_var "$key") + local file + file=$(_core_config_global_file) + [[ -f "$file" ]] || { + echo "" + return 0 + } + local type="${_CKIPPER_SCHEMA_TYPE[$key]:-}" + if [[ "$type" == "int_array" ]]; then + awk -v v="$var" -F= ' + $1 == v { + sub(/^[^=]+=/, "") + gsub(/^\(|\)$/, "") + gsub(/^[ \t]+|[ \t]+$/, "") + gsub(/[ \t]+/, ",") + print + exit + } + ' "$file" + return 0 + fi + awk -v v="$var" -F= '$1 == v { sub(/^[^=]+=/, ""); gsub(/^"|"$/, ""); print; exit }' "$file" +} + +# Read an account preference from the registry. +# +# Args: $1 — key, $2 — account name +# Returns: 0; prints the stored value or empty string if unset. +_core_config_read_account() { + local key="$1" account="$2" + [[ -f "$CKIPPER_REGISTRY" ]] || { + echo "" + return 0 + } + jq -r --arg n "$account" --arg k "$key" ' + if (.accounts[$n].preferences | has($k)) + then .accounts[$n].preferences[$k] | tostring + else "" + end + ' "$CKIPPER_REGISTRY" +} + +# Resolve effective value: account override → global → schema default. +# +# Args: $1 — key, $2 — (optional) account name +# Returns: 0; prints the resolved value (may be empty if default is empty). 
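The account-override, then global, then schema-default fallback chain is the heart of this module. Here is a self-contained bash sketch of the same resolution order, with associative arrays standing in for the registry preferences, the global file, and the schema defaults; all names are hypothetical, not ckipper's actual stores:

```shell
#!/usr/bin/env bash
# Toy stores: account prefs keyed as "<account>/<key>", plus global and default maps.
declare -A ACCOUNT_PREFS=([work/ssh_forward]="false")
declare -A GLOBAL=([notify_bell]="false")
declare -A DEFAULT=([ssh_forward]="true" [notify_bell]="true" [ports]="3000")

config_get() {
  local key="$1" account="${2:-}" val=""
  # 1. account override (only when an account was named)
  [[ -n "$account" ]] && val="${ACCOUNT_PREFS[$account/$key]:-}"
  # 2. global value
  [[ -z "$val" ]] && val="${GLOBAL[$key]:-}"
  # 3. schema default
  [[ -z "$val" ]] && val="${DEFAULT[$key]}"
  echo "$val"
}

config_get ssh_forward work   # account override wins
config_get notify_bell        # falls through to the global value
config_get ports              # falls through to the schema default
```

One consequence of this shape: an empty-string override is indistinguishable from "unset", which is why the real module's docs note the resolved value "may be empty if default is empty".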
+_core_config_get() {
+  local key="$1" account="${2:-}"
+  local val=""
+  if [[ -n "$account" && "${_CKIPPER_SCHEMA_SCOPE[$key]}" == "account" ]]; then
+    val=$(_core_config_read_account "$key" "$account")
+    [[ -n "$val" ]] && {
+      echo "$val"
+      return 0
+    }
+  fi
+  val=$(_core_config_read_global "$key")
+  [[ -n "$val" ]] && {
+    echo "$val"
+    return 0
+  }
+  echo "${_CKIPPER_SCHEMA_DEFAULT[$key]}"
+}
+
+# Validate a value against the schema type for a key.
+#
+# Args: $1 — key, $2 — value
+# Returns: 0 if valid; 1 on unknown key or type mismatch.
+# Errors (stderr):
+#   "Unknown config key: '<key>'" — when key not in schema.
+#   "Invalid value for '<key>': '<value>' (expected <type>)" — on type mismatch.
+#   "Invalid value for '<key>': contains shell-breakout characters..." — when a
+#     string/path value contains `"`, `\`, `` ` ``, or `$(` (these would
+#     execute as code on the next shell start when ckipper-config.zsh is sourced).
+_core_config_validate() {
+  local key="$1" value="$2"
+  local type="${_CKIPPER_SCHEMA_TYPE[$key]:-}"
+  if [[ -z "$type" ]]; then
+    echo "Unknown config key: '$key'" >&2
+    return 1
+  fi
+  case "$type" in
+    bool)
+      [[ "$value" == "true" || "$value" == "false" ]] && return 0
+      ;;
+    int)
+      [[ "$value" =~ ^[0-9]+$ ]] && return 0
+      ;;
+    int_array)
+      [[ "$value" =~ ^[0-9]+(,[0-9]+)*$ ]] && return 0
+      ;;
+    string | path)
+      _core_config_reject_shell_breakout "$key" "$value" || return 1
+      return 0
+      ;;
+  esac
+  echo "Invalid value for '$key': '$value' (expected $type)" >&2
+  return 1
+}
+
+# Reject string/path values that would inject shell code when the global
+# config file is sourced. The blocked set: `"` (closes the assignment),
+# `\` (escape sequences that can break out), `` ` `` (legacy command
+# substitution), `$(` (modern command substitution). `$VAR` and `${VAR}`
+# are intentionally allowed — the schema defaults rely on `$HOME`.
+#
+# Args: $1 — key (for the error message), $2 — candidate value.
+# Returns: 0 if value is shell-safe; 1 with stderr error otherwise.
+# Errors (stderr): "Invalid value for '<key>': contains shell-breakout characters..."
+_core_config_reject_shell_breakout() {
+  local key="$1" value="$2"
+  if [[ "$value" == *'"'* || "$value" == *'\'* \
+    || "$value" == *'`'* || "$value" == *'$('* ]]; then
+    echo "Invalid value for '$key': contains shell-breakout characters (\", \\, \`, \$()." >&2
+    return 1
+  fi
+  return 0
+}
+
+# Format a global config-file line for a given key/value pair, picking the
+# right zsh syntax based on schema type. int_array values land as zsh array
+# literals so a `for x in "${KEY[@]}"` consumer sees real elements; all other
+# types land as quoted scalars.
+#
+# Args: $1 — variable name (e.g. CKIPPER_PORTS), $2 — value, $3 — schema type
+# Returns: 0; prints the formatted assignment line.
+_core_config_format_line() {
+  local var="$1" value="$2" type="$3"
+  if [[ "$type" == "int_array" ]]; then
+    echo "${var}=(${value//,/ })"
+    return 0
+  fi
+  echo "${var}=\"${value}\""
+}
+
+# Write a global key into the config file, replacing any existing assignment.
+# Idempotent: existing CKIPPER_<KEY>= line is rewritten in place; absent keys
+# are appended. The on-disk form depends on the schema type — see
+# _core_config_format_line.
+#
+# Args: $1 — key, $2 — value
+# Returns: 0 on success; 1 on validation failure.
+_core_config_write_global() {
+  local key="$1" value="$2"
+  _core_config_validate "$key" "$value" || return 1
+  local var
+  var=$(_core_config_global_var "$key")
+  local type="${_CKIPPER_SCHEMA_TYPE[$key]:-}"
+  local line
+  line=$(_core_config_format_line "$var" "$value" "$type")
+  local file
+  file=$(_core_config_global_file)
+  mkdir -p "${file:h}"
+  [[ -f "$file" ]] || : >"$file"
+  local tmp
+  tmp=$(mktemp "${file}.XXXXXX")
+  awk -v v="$var" -v repl="$line" -F= '
+    $1 == v { print repl; found=1; next }
+    { print }
+    END { if (!found) print repl }
+  ' "$file" >"$tmp" && mv "$tmp" "$file"
+}
+
+# Write an account preference into the registry.
Coerces "true"/"false"/numeric
+# strings to native JSON types so consumers don't see stringified bools.
+# Routes through _core_registry_update so concurrent writers cannot lose
+# updates and the file's 0600 perms are re-asserted on every successful write.
+#
+# Args: $1 — key, $2 — value, $3 — account name
+# Returns: 0 on success; 1 on validation, lock-acquisition, or jq/write failure.
+_core_config_write_account() {
+  local key="$1" value="$2" account="$3"
+  _core_config_validate "$key" "$value" || return 1
+  _core_registry_update '
+    .accounts[$n].preferences[$k] = (
+      if $v == "true" then true
+      elif $v == "false" then false
+      elif ($v | test("^[0-9]+$")) then ($v | tonumber)
+      else $v end
+    )
+  ' --arg n "$account" --arg k "$key" --arg v "$value"
+}
+
+# Public set — routes to global or account-scoped write per the schema.
+#
+# Args: $1 — key, $2 — value, $3 — (optional) account name
+# Returns: 0 on success; 1 on validation/write failure or missing account
+#          for an account-scoped key.
+# Errors (stderr): "Key '<key>' requires --account." — when scope=account
+#   but no account name was supplied.
+_core_config_set() {
+  local key="$1" value="$2" account="${3:-}"
+  local scope="${_CKIPPER_SCHEMA_SCOPE[$key]:-}"
+  if [[ "$scope" == "account" ]]; then
+    [[ -z "$account" ]] && {
+      echo "Key '$key' requires --account." >&2
+      return 1
+    }
+    _core_config_write_account "$key" "$value" "$account"
+  else
+    _core_config_write_global "$key" "$value"
+  fi
+}
+
+# Remove a global override, reverting future reads to the schema default.
+#
+# Args: $1 — key
+# Returns: 0 always (no-op when file is absent or line is missing).
+_core_config_unset_global() {
+  local key="$1"
+  local var
+  var=$(_core_config_global_var "$key")
+  local file
+  file=$(_core_config_global_file)
+  [[ -f "$file" ]] || return 0
+  local tmp
+  tmp=$(mktemp "${file}.XXXXXX")
+  awk -v v="$var" -F= '$1 != v' "$file" >"$tmp" && mv "$tmp" "$file"
+}
+
+# Remove an account preference override.
Routes through _core_registry_update
+# so concurrent writers cannot lose updates.
+#
+# Args: $1 — key, $2 — account name
+# Returns: 0 when registry is absent (no-op) or the lock-protected delete
+#          succeeds; 1 on lock-acquisition or jq/write failure.
+_core_config_unset_account() {
+  local key="$1" account="$2"
+  [[ -f "$CKIPPER_REGISTRY" ]] || return 0
+  _core_registry_update 'del(.accounts[$n].preferences[$k])' \
+    --arg n "$account" --arg k "$key"
+}
+
+# Public unset — routes to global or account-scoped removal per the schema.
+#
+# Args: $1 — key, $2 — (optional) account name
+# Returns: 0 on success; 1 if scope=account but no account name supplied.
+# Errors (stderr): "Key '<key>' requires --account." — see above.
+_core_config_unset() {
+  local key="$1" account="${2:-}"
+  local scope="${_CKIPPER_SCHEMA_SCOPE[$key]:-global}"
+  if [[ "$scope" == "account" ]]; then
+    [[ -z "$account" ]] && {
+      echo "Key '$key' requires --account." >&2
+      return 1
+    }
+    _core_config_unset_account "$key" "$account"
+  else
+    _core_config_unset_global "$key"
+  fi
+}
diff --git a/lib/core/config_test.bats b/lib/core/config_test.bats
new file mode 100644
index 0000000..6d9dd6d
--- /dev/null
+++ b/lib/core/config_test.bats
@@ -0,0 +1,271 @@
+#!/usr/bin/env bats
+# Module-level tests for lib/core/config.zsh.
+# Verifies _core_config_get/set/unset/validate primitives against the schema
+# from lib/core/schema.zsh. config.zsh is zsh-only, so each assertion spawns
+# a zsh subshell that sources schema then config and runs the function under
+# test (matching the pattern in registry_test.bats and schema_test.bats).
+
+load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  mkdir -p "$CKIPPER_DIR/docker"
+  : >"$CKIPPER_DIR/docker/ckipper-config.zsh"
+  # v2 accounts.json fixture with one `work` account having empty preferences.
+ cat >"$CKIPPER_REGISTRY" <<'JSON' +{"version":2,"default":"work","accounts":{"work":{"config_dir":"/x","keychain_service":null,"registered_at":"t","preferences":{}}}} +JSON +} + +teardown() { + teardown_isolated_env +} + +# Helper: source schema.zsh + registry.zsh + config.zsh in zsh and run zsh_cmd. +# registry.zsh is sourced because account-scoped writes in config.zsh now route +# through _core_registry_update for lock-protected, atomic updates (I-1 fix). +_run_config() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + PATH="$PATH" \ + zsh -c " + source \"$REPO_ROOT/lib/core/schema.zsh\" + source \"$REPO_ROOT/lib/core/registry.zsh\" + source \"$REPO_ROOT/lib/core/config.zsh\" + $zsh_cmd + " +} + +@test "_core_config_get returns schema default when key is unset" { + _run_config "_core_config_get notify_bell" + + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "_core_config_get returns global value when set" { + echo 'CKIPPER_NOTIFY_BELL="false"' >"$CKIPPER_DIR/docker/ckipper-config.zsh" + + _run_config "_core_config_get notify_bell" + + [ "$status" -eq 0 ] + [ "$output" = "false" ] +} + +@test "_core_config_get returns account override when set" { + _run_config "_core_config_set always_docker true work && _core_config_get always_docker work" + + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "_core_config_set writes global key idempotently" { + _run_config "_core_config_set notify_bell false && _core_config_set notify_bell false" + + [ "$status" -eq 0 ] + local count + count=$(grep -c '^CKIPPER_NOTIFY_BELL=' "$CKIPPER_DIR/docker/ckipper-config.zsh") + [ "$count" = "1" ] +} + +@test "_core_config_validate accepts valid bool" { + _run_config "_core_config_validate notify_bell true" + [ "$status" -eq 0 ] + + _run_config "_core_config_validate notify_bell false" + [ "$status" -eq 0 ] +} + +@test "_core_config_validate rejects invalid bool" { + _run_config 
"_core_config_validate notify_bell yes" + + [ "$status" -ne 0 ] +} + +@test "_core_config_validate rejects unknown key" { + _run_config "_core_config_validate not_a_real_key true" + + [ "$status" -ne 0 ] +} + +@test "_core_config_unset removes the override and returns default" { + _run_config "_core_config_set notify_bell false && _core_config_unset notify_bell && _core_config_get notify_bell" + + [ "$status" -eq 0 ] + [ "$output" = "true" ] +} + +@test "_core_config_get returns false when set false on per-account key with default true" { + _run_config "_core_config_set ssh_forward false work && _core_config_get ssh_forward work" + + [ "$status" -eq 0 ] + [ "$output" = "false" ] +} + +@test "_core_config_validate accepts integer-array values like \"3000\" and \"3000,3030,6006\"" { + _run_config "_core_config_validate ports 3000" + [ "$status" -eq 0 ] + + _run_config "_core_config_validate ports 3000,3030,6006" + [ "$status" -eq 0 ] +} + +@test "_core_config_validate rejects malformed int_array" { + _run_config "_core_config_validate ports abc" + [ "$status" -ne 0 ] + + _run_config "_core_config_validate ports 3000,abc" + [ "$status" -ne 0 ] + + _run_config '_core_config_validate ports ""' + [ "$status" -ne 0 ] +} + +@test "_core_config_validate accepts string and path values trivially" { + _run_config '_core_config_validate default_branch "main"' + [ "$status" -eq 0 ] + + _run_config '_core_config_validate default_branch ""' + [ "$status" -eq 0 ] + + _run_config '_core_config_validate projects_dir "/some/path"' + [ "$status" -eq 0 ] +} + +@test "_core_config_validate allows variable-style \$ in path values" { + # Users routinely set projects_dir to e.g. \$HOME/Developer; the global file + # is sourced as zsh, so the expansion happens at shell-startup time. + # The escaped \$ ensures the validator sees the literal "$HOME/..." string + # (not pre-expanded by the zsh -c host) — the whole point of this test. 
+ _run_config '_core_config_validate projects_dir "\$HOME/Developer"' + [ "$status" -eq 0 ] + + _run_config '_core_config_validate projects_dir "\${HOME}/code"' + [ "$status" -eq 0 ] +} + +@test "_core_config_validate rejects shell-breakout chars in string/path values" { + # The global config file is sourced by zsh, so these would otherwise execute + # arbitrary code on every shell start. Each character class is a distinct + # breakout vector; assert each is rejected. + _run_config '_core_config_validate projects_dir "evil\"; rm -rf /; echo \""' + [ "$status" -ne 0 ] + + _run_config '_core_config_validate projects_dir "\`whoami\`"' + [ "$status" -ne 0 ] + + _run_config '_core_config_validate projects_dir "\$(whoami)"' + [ "$status" -ne 0 ] + + _run_config '_core_config_validate projects_dir "back\\\\slash"' + [ "$status" -ne 0 ] + + _run_config '_core_config_validate dep_install_cmd "npm install \$(echo bad)"' + [ "$status" -ne 0 ] +} + +@test "_core_config_set persists no shell injection through the global file" { + # End-to-end: a quote-breakout value passed to _core_config_set must NOT + # land in the sourced config file. Verifies the validator gates the writer. 
+ _run_config '_core_config_set projects_dir "evil\"; export INJECTED=1; echo \""' + + [ "$status" -ne 0 ] + run grep -F 'INJECTED=1' "$CKIPPER_DIR/docker/ckipper-config.zsh" + [ "$status" -ne 0 ] +} + +@test "_core_config_set rejects account-scoped key with no account argument" { + _run_config "_core_config_set always_docker true" + + [ "$status" -ne 0 ] + [[ "$output" == *"requires --account"* ]] +} + +@test "_core_config_unset for account scope removes the override and returns default" { + _run_config "_core_config_set ssh_forward false work && _core_config_get ssh_forward work && _core_config_unset ssh_forward work && _core_config_get ssh_forward work" + + [ "$status" -eq 0 ] + [ "${lines[0]}" = "false" ] + [ "${lines[1]}" = "true" ] +} + +# I-1 regression: account-scoped registry writes must go through the locked +# update primitive (_core_registry_update). Stubbing _core_registry_update to +# leave a marker file proves the routing — if a writer bypasses the lock and +# does its own `mktemp + mv`, the stub never runs and the marker is missing. + +@test "_core_config_set account-scoped routes through _core_registry_update" { + local marker="$CKIPPER_DIR/_registry_update_called" + + _run_config " + _core_registry_update() { : > '$marker'; return 0; } + _core_config_set always_docker true work + " + + [ "$status" -eq 0 ] + [ -f "$marker" ] +} + +@test "_core_config_unset account-scoped routes through _core_registry_update" { + local marker="$CKIPPER_DIR/_registry_update_called" + + _run_config " + _core_registry_update() { : > '$marker'; return 0; } + _core_config_unset always_docker work + " + + [ "$status" -eq 0 ] + [ -f "$marker" ] +} + +# Regression: _core_config_write_global emitted `CKIPPER_PORTS="3000,3030,6006"` +# regardless of schema type, so an int_array key got rewritten as a quoted scalar. +# When the file was sourced, CKIPPER_PORTS became a string, breaking +# `for port in "${CKIPPER_PORTS[@]}"` in lib/worktree/ports.zsh. 
The writer must +# emit zsh array literal form for int_array types so the value round-trips +# correctly through both the file and the reader. + +@test "_core_config_set writes int_array as zsh array literal (not quoted scalar)" { + _run_config "_core_config_set ports 3000,3030,6006" + + [ "$status" -eq 0 ] + run grep -E '^CKIPPER_PORTS=' "$CKIPPER_DIR/docker/ckipper-config.zsh" + [ "$status" -eq 0 ] + [[ "$output" == 'CKIPPER_PORTS=(3000 3030 6006)' ]] +} + +@test "_core_config_get round-trips an int_array as CSV" { + _run_config "_core_config_set ports 3000,3030,6006 && _core_config_get ports" + + [ "$status" -eq 0 ] + [ "$output" = "3000,3030,6006" ] +} + +@test "sourcing the written config file populates CKIPPER_PORTS as a real array" { + _run_config ' + _core_config_set ports 3000,3030,6006 || exit 1 + unset CKIPPER_PORTS + source "$CKIPPER_DIR/docker/ckipper-config.zsh" + # Assert array shape: 3 elements, first is 3000. + (( ${#CKIPPER_PORTS[@]} == 3 )) || { echo "expected 3 elements, got ${#CKIPPER_PORTS[@]}" >&2; exit 2; } + [[ "${CKIPPER_PORTS[1]}" == "3000" ]] || { echo "expected first=3000, got ${CKIPPER_PORTS[1]}" >&2; exit 3; } + [[ "${CKIPPER_PORTS[3]}" == "6006" ]] || { echo "expected third=6006, got ${CKIPPER_PORTS[3]}" >&2; exit 4; } + ' + + [ "$status" -eq 0 ] +} + +@test "_core_config_set rewrites a pre-existing array literal in place (idempotent)" { + echo 'CKIPPER_PORTS=(3000)' > "$CKIPPER_DIR/docker/ckipper-config.zsh" + + _run_config "_core_config_set ports 4000,5000" + + [ "$status" -eq 0 ] + local count + count=$(grep -c '^CKIPPER_PORTS=' "$CKIPPER_DIR/docker/ckipper-config.zsh") + [ "$count" = "1" ] + run grep -E '^CKIPPER_PORTS=' "$CKIPPER_DIR/docker/ckipper-config.zsh" + [[ "$output" == 'CKIPPER_PORTS=(4000 5000)' ]] +} diff --git a/lib/core/fuzzy.zsh b/lib/core/fuzzy.zsh new file mode 100644 index 0000000..6dac59f --- /dev/null +++ b/lib/core/fuzzy.zsh @@ -0,0 +1,90 @@ +#!/usr/bin/env zsh +# Fuzzy-suggest helper. 
Pure functions: no globals read or written. +# +# Used by ckipper dispatchers to suggest the closest known subcommand +# when the user types something unrecognised. + +readonly _CORE_FUZZY_DISTANCE_THRESHOLD=2 + +# Compute Levenshtein edit distance between two strings. +# +# Args: +# $1 — string A +# $2 — string B +# +# Returns: 0 always. Prints the distance (non-negative integer) to stdout. +_core_fuzzy_levenshtein() { + local a="$1" b="$2" + local la=${#a} lb=${#b} + (( la == 0 )) && { echo "$lb"; return 0; } + (( lb == 0 )) && { echo "$la"; return 0; } + + local -a prev curr + local i j cost del ins sub + for (( i = 0; i <= lb; i++ )); do + prev[$((i + 1))]=$i + done + for (( i = 1; i <= la; i++ )); do + curr[1]=$i + for (( j = 1; j <= lb; j++ )); do + cost=1 + [[ "${a[i]}" == "${b[j]}" ]] && cost=0 + del=$(( prev[j + 1] + 1 )) + ins=$(( curr[j] + 1 )) + sub=$(( prev[j] + cost )) + curr[$((j + 1))]=$(( del < ins ? (del < sub ? del : sub) : (ins < sub ? ins : sub) )) + done + prev=("${curr[@]}") + done + echo "${prev[lb + 1]}" +} + +# Find the closest candidate to the input within the distance threshold. +# +# Walks every candidate, keeps the one with the smallest distance ≤ threshold, +# and prints it. Exact matches return distance 0 and win automatically. Ties go +# to the first-seen candidate (stable on insertion order). +# +# Args: +# $1 — input token (the unknown subcommand the user typed) +# $2..$N — candidate list +# +# Returns: 0 always. Prints the closest candidate, or empty string if no +# candidate is within the threshold (or the candidate list is empty). 
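For reference, the two-row dynamic-programming recurrence behind the distance function above (keep only the previous row, take the minimum of delete, insert, and substitute at each cell) can be sketched in portable bash. This is a standalone illustration of the algorithm, not the zsh code:

```shell
#!/usr/bin/env bash
# Two-row Levenshtein: prev holds row i-1, curr is rebuilt for row i.
levenshtein() {
  local a="$1" b="$2"
  local la=${#a} lb=${#b}
  local -a prev curr
  local i j cost del ins sub
  # Row 0: distance from the empty prefix of a is just j insertions.
  for (( j = 0; j <= lb; j++ )); do prev[j]=$j; done
  for (( i = 1; i <= la; i++ )); do
    curr[0]=$i
    for (( j = 1; j <= lb; j++ )); do
      cost=1
      [[ "${a:i-1:1}" == "${b:j-1:1}" ]] && cost=0
      del=$(( prev[j] + 1 ))        # delete a[i]
      ins=$(( curr[j-1] + 1 ))      # insert b[j]
      sub=$(( prev[j-1] + cost ))   # substitute (free on a match)
      curr[j]=$(( del < ins ? (del < sub ? del : sub) : (ins < sub ? ins : sub) ))
    done
    prev=("${curr[@]}")
  done
  echo "${prev[lb]}"
}

levenshtein lst list    # 1 (one insertion)
levenshtein lsit list   # 2 (a transposition costs two single-character edits)
```

The transposition cost of 2 is why a threshold of 2 still catches swapped-letter typos like `lsit`.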
+_core_fuzzy_suggest() { + local input="$1" + shift + local best="" best_dist=$(( _CORE_FUZZY_DISTANCE_THRESHOLD + 1 )) + local candidate dist + for candidate in "$@"; do + dist=$(_core_fuzzy_levenshtein "$input" "$candidate") + if (( dist <= _CORE_FUZZY_DISTANCE_THRESHOLD && dist < best_dist )); then + best="$candidate" + best_dist="$dist" + fi + done + echo "$best" +} + +# Print an "Unknown command" message with a closest-match suggestion (if any) +# followed by a help-pointer line. All output goes to stderr — this enforces +# the unknown-command stderr contract for every dispatch tier. +# +# Args: +# $1 — the unknown command token the user typed +# $2 — help pointer text (e.g. "Run 'ckipper help' for available commands.") +# $3..$N — known-command candidate list +# +# Returns: 0 always. +_core_unknown_command() { + local cmd="$1" help_text="$2" + shift 2 + local suggestion + suggestion=$(_core_fuzzy_suggest "$cmd" "$@") + if [[ -n "$suggestion" ]]; then + echo "Unknown command: '$cmd'. Did you mean: '$suggestion'?" >&2 + else + echo "Unknown command: '$cmd'." >&2 + fi + echo "$help_text" >&2 +} diff --git a/lib/core/fuzzy_test.bats b/lib/core/fuzzy_test.bats new file mode 100644 index 0000000..79fe47c --- /dev/null +++ b/lib/core/fuzzy_test.bats @@ -0,0 +1,56 @@ +#!/usr/bin/env bats +# Module-level tests for lib/core/fuzzy.zsh. +# Tests _core_fuzzy_suggest observable behaviour. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: source fuzzy.zsh and run a command in a zsh subshell. 
+_run_fuzzy() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" PATH="$PATH" \ + zsh -c "source \"$REPO_ROOT/lib/core/fuzzy.zsh\"; $zsh_cmd" +} + +@test "_core_fuzzy_suggest returns exact match unchanged" { + _run_fuzzy "_core_fuzzy_suggest list list add remove" + [ "$status" -eq 0 ] + [ "$output" = "list" ] +} + +@test "_core_fuzzy_suggest returns close single-letter typo" { + _run_fuzzy "_core_fuzzy_suggest lst list add remove" + [ "$status" -eq 0 ] + [ "$output" = "list" ] +} + +@test "_core_fuzzy_suggest returns transposition" { + _run_fuzzy "_core_fuzzy_suggest lsit list add remove" + [ "$status" -eq 0 ] + [ "$output" = "list" ] +} + +@test "_core_fuzzy_suggest returns nothing for far-off input" { + _run_fuzzy "_core_fuzzy_suggest xyzabc list add remove" + [ "$status" -eq 0 ] + [ -z "$output" ] +} + +@test "_core_fuzzy_suggest picks the closest of multiple candidates" { + _run_fuzzy "_core_fuzzy_suggest ad add default doctor" + [ "$status" -eq 0 ] + [ "$output" = "add" ] +} + +@test "_core_fuzzy_suggest handles empty candidate list" { + _run_fuzzy "_core_fuzzy_suggest anything" + [ "$status" -eq 0 ] + [ -z "$output" ] +} diff --git a/lib/core/help.zsh b/lib/core/help.zsh new file mode 100644 index 0000000..d74694c --- /dev/null +++ b/lib/core/help.zsh @@ -0,0 +1,27 @@ +#!/usr/bin/env zsh +# Uniform help-page renderer for ckipper subcommand --help output. +# +# Wraps _core_style_header (from lib/core/style.zsh) so every help page renders +# with the same divider+title+divider chrome, then prints body lines verbatim. +# Body content (Synopsis / Description / Args / Examples / etc.) is the +# caller's responsibility; this module only owns the chrome and the line-wise +# emission contract. + +# Render a help page: styled header followed by body lines. +# +# Args: $1 — page title (passed straight to _core_style_header). +# $2..$N — body lines; each is printed verbatim on its own line. +# Returns: 0 always. 
+# +# Example: +# _core_help_render "ckipper account add" \ +# "Synopsis: ckipper account add " \ +# "Description: register a new isolated account." +_core_help_render() { + local title="$1" + shift 2>/dev/null + _core_style_header "$title" + (($# == 0)) && return 0 + printf '%s\n' "$@" + echo "" +} diff --git a/lib/core/help_test.bats b/lib/core/help_test.bats new file mode 100644 index 0000000..dcde477 --- /dev/null +++ b/lib/core/help_test.bats @@ -0,0 +1,62 @@ +#!/usr/bin/env bats +# Module-level tests for lib/core/help.zsh. +# help.zsh wraps _core_style_header (from style.zsh) and prints body lines. +# Like style.zsh, it relies on zsh-only constructs upstream, so every assertion +# spawns a zsh subshell that sources both files (matching style_test.bats). + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: source style.zsh + help.zsh in zsh and run zsh_cmd. +# Forwards CKIPPER_FORCE_COLOR / NO_COLOR explicitly so the upstream color +# decision in style.zsh stays deterministic under bats `run` (non-TTY). +_run_help() { + local zsh_cmd="$1" + run env CKIPPER_FORCE_COLOR="${CKIPPER_FORCE_COLOR:-}" \ + NO_COLOR="${NO_COLOR:-}" \ + PATH="$PATH" \ + zsh -c "source \"$REPO_ROOT/lib/core/style.zsh\"; source \"$REPO_ROOT/lib/core/help.zsh\"; $zsh_cmd" +} + +@test "_core_help_render emits Synopsis / Description / Args sections plus title" { + _run_help '_core_help_render "ckipper foo bar" "Synopsis: ck foo" "Description: does the foo." 
"Args:" "  <baz> the baz"'
+
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"ckipper foo bar"* ]]
+  [[ "$output" == *"Synopsis"* ]]
+  [[ "$output" == *"Description"* ]]
+  [[ "$output" == *"Args"* ]]
+}
+
+@test "_core_help_render handles empty body gracefully" {
+  _run_help '_core_help_render "ckipper foo"'
+
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"ckipper foo"* ]]
+}
+
+@test "_core_help_render preserves leading whitespace in body lines" {
+  _run_help '_core_help_render "ckipper foo" " --flag description"'
+
+  [ "$status" -eq 0 ]
+  [[ "$output" == *" --flag description"* ]]
+}
+
+@test "_core_help_render under NO_COLOR strips ANSI escape sequences" {
+  unset CKIPPER_FORCE_COLOR
+  export NO_COLOR=1
+
+  _run_help '_core_help_render "ckipper foo" "body line"'
+
+  [ "$status" -eq 0 ]
+  [[ "$output" == *"ckipper foo"* ]]
+  [[ "$output" == *"body line"* ]]
+  [[ "$output" != *$'\x1b['* ]]
+}
diff --git a/lib/core/keychain.zsh b/lib/core/keychain.zsh
new file mode 100644
index 0000000..4ed1017
--- /dev/null
+++ b/lib/core/keychain.zsh
@@ -0,0 +1,127 @@
+#!/usr/bin/env zsh
+# Shared macOS Keychain and Claude process utilities.

+readonly _CORE_KEYCHAIN_TIMEOUT_SECONDS=10
+
+# Validate a keychain_service name before passing to `security`.
+# Accepts "Claude Code-credentials" optionally followed by "-<hex>".
+#
+# Args:
+#   $1 — service name to validate
+#
+# Returns:
+#   0 if valid; non-zero if empty or wrong shape.
+_core_keychain_validate() {
+  local svc="$1"
+  [[ -z "$svc" ]] && return 1
+  [[ "$svc" =~ ^Claude\ Code-credentials(-[a-f0-9]+)?$ ]]
+}
+
+# Detect the best available timeout command for wrapping keychain access.
+#
+# Returns:
+#   0 always; prints the timeout command prefix to stdout (empty if none found).
+_core_keychain_detect_timeout_cmd() {
+  if command -v timeout >/dev/null 2>&1; then
+    printf 'timeout %s' "$_CORE_KEYCHAIN_TIMEOUT_SECONDS"
+  elif command -v gtimeout >/dev/null 2>&1; then
+    printf 'gtimeout %s' "$_CORE_KEYCHAIN_TIMEOUT_SECONDS"
+  fi
+}
+
+# Dump the macOS Keychain using a timeout wrapper, then filter for Claude entries.
+#
+# Args:
+#   $1 — timeout command prefix (e.g. "timeout 10"), or empty for no timeout
+#
+# Returns:
+#   0 on success with Claude service names printed to stdout; 1 on keychain error.
+_core_keychain_snapshot_with_timeout() {
+  local timeout_cmd="$1"
+  local out
+  # ${=timeout_cmd} forces word splitting (off by default in zsh) so a prefix
+  # like "timeout 10" runs as a command plus argument, not one literal word.
+  if ! out=$(${=timeout_cmd} security dump-keychain 2>/dev/null); then
+    echo "Warning: Keychain may be locked or slow. Unlock it (Keychain Access > File > Unlock) and retry." >&2
+    return 1
+  fi
+  printf '%s\n' "$out" | \
+    awk -F'"' '/"svce"="Claude Code-credentials/ {print $4}' | \
+    sort -u
+}
+
+# Dump the macOS Keychain without a timeout wrapper, then filter for Claude entries.
+#
+# Returns:
+#   0 on success with Claude service names printed to stdout; 1 on keychain error.
+#
+# Errors (stderr):
+#   "Warning: 'security dump-keychain' failed..." — when keychain dump exits non-zero.
+_core_keychain_snapshot_fallback() {
+  local out
+  # No timeout available — run without. If keychain is locked the GUI
+  # password prompt will block this, which is a fine failure mode.
+  if ! out=$(security dump-keychain 2>/dev/null); then
+    echo "Warning: 'security dump-keychain' failed. Keychain may be locked." >&2
+    return 1
+  fi
+  printf '%s\n' "$out" | \
+    awk -F'"' '/"svce"="Claude Code-credentials/ {print $4}' | \
+    sort -u
+}
+
+# Return service names of all "Claude Code-credentials*" Keychain entries, sorted.
+# macOS only — returns 0 immediately on other platforms.
+#
+# Returns:
+#   0 on success; 1 if keychain is locked or unavailable.
+#
+# Errors (stderr):
+#   Warning messages when the keychain is slow, locked, or dump fails.
+_core_keychain_snapshot() {
+  [[ "${_CKIPPER_TEST_OSTYPE:-$OSTYPE}" != darwin* ]] && return 0
+
+  # Pick a timeout binary if available (macOS doesn't ship one; gtimeout from
+  # coreutils is the typical brew install). Fall through to no timeout if neither
+  # is present — better than failing with a misleading "keychain locked" error.
+  local timeout_cmd
+  timeout_cmd=$(_core_keychain_detect_timeout_cmd)
+
+  if [[ -n "$timeout_cmd" ]]; then
+    _core_keychain_snapshot_with_timeout "$timeout_cmd"
+  else
+    _core_keychain_snapshot_fallback
+  fi
+}
+
+# Detect running Claude processes that would conflict with destructive operations.
+# Matches: 'claude' CLI (basename), 'Claude' (Claude.app main process). Avoids matching
+# vim files named 'claude-*', tmux sessions, or Claude Helper subprocesses (the parent
+# Claude.app being killed will cascade to those).
+#
+# Returns:
+#   0 always; matching processes printed to stdout.
+_core_running_claude_processes() {
+  pgrep -lx claude 2>/dev/null
+  pgrep -lx Claude 2>/dev/null
+  return 0  # pgrep exits 1 on no match; callers read stdout, not our status
+}
+
+# Refuse with a clear message if any Claude process is running.
+#
+# Returns:
+#   0 if no Claude processes found (or CKIPPER_FORCE=1 is set); 1 otherwise.
+#
+# Errors (stderr):
+#   "Error: Claude process(es) detected..." — when running processes found and
+#   CKIPPER_FORCE is unset.
+_core_assert_no_running_claude() {
+  local found
+  found=$(_core_running_claude_processes)
+  [[ -z "$found" ]] && return 0
+
+  if [[ "$CKIPPER_FORCE" == "1" ]]; then
+    echo "CKIPPER_FORCE=1 set — proceeding despite running Claude." >&2
+    return 0
+  fi
+  echo "Error: Claude process(es) detected. Quit them first:" >&2
+  echo "$found" | sed 's/^/  /' >&2
+  echo "(Set CKIPPER_FORCE=1 to bypass this check, but expect inconsistent state.)" >&2
+  return 1
+}
diff --git a/lib/core/keychain_test.bats b/lib/core/keychain_test.bats
new file mode 100644
index 0000000..ce21ed4
--- /dev/null
+++ b/lib/core/keychain_test.bats
@@ -0,0 +1,59 @@
+#!/usr/bin/env bats
+# Module-level tests for lib/core/keychain.zsh.
+# Covers validate, snapshot_with_timeout, snapshot_fallback, running_claude_processes, +# and assert_no_running_claude. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + export _CKIPPER_TEST_OSTYPE="darwin19.0" +} + +teardown() { + teardown_isolated_env +} + +# Helper: source keychain.zsh and run zsh_cmd. +_run_keychain() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + _CKIPPER_TEST_OSTYPE="${_CKIPPER_TEST_OSTYPE:-darwin19.0}" \ + CKIPPER_FORCE="${CKIPPER_FORCE:-0}" \ + PGREP_STUB_MATCH="${PGREP_STUB_MATCH:-0}" \ + SECURITY_STUB_DUMP="${SECURITY_STUB_DUMP:-}" \ + PATH="$PATH" \ + zsh -c "source \"$REPO_ROOT/lib/core/keychain.zsh\"; $zsh_cmd" +} + +@test "_core_keychain_validate accepts a valid service name" { + _run_keychain '_core_keychain_validate "Claude Code-credentials"' + + [ "$status" -eq 0 ] +} + +@test "_core_keychain_validate accepts a service name with hex suffix" { + _run_keychain '_core_keychain_validate "Claude Code-credentials-abc123"' + + [ "$status" -eq 0 ] +} + +@test "_core_keychain_validate rejects an empty service name" { + _run_keychain '_core_keychain_validate ""' + + [ "$status" -ne 0 ] +} + +@test "_core_keychain_validate rejects a name with wrong prefix" { + _run_keychain '_core_keychain_validate "NotClaude-credentials"' + + [ "$status" -ne 0 ] +} + +@test "_core_assert_no_running_claude passes when no Claude processes are running" { + export PGREP_STUB_MATCH=0 + + _run_keychain "_core_assert_no_running_claude" + + [ "$status" -eq 0 ] +} diff --git a/lib/core/prompt.zsh b/lib/core/prompt.zsh new file mode 100644 index 0000000..f6ccf9f --- /dev/null +++ b/lib/core/prompt.zsh @@ -0,0 +1,111 @@ +#!/usr/bin/env zsh +# Interactive prompt helpers for ckipper. Wraps gum (charmbracelet/gum) when +# available, otherwise falls back to pure-zsh `read` prompts. 
+# +# All public helpers honor CKIPPER_NO_GUM: setting it to "1" forces the +# pure-zsh fallback path so non-TTY callers and tests can pin behavior +# regardless of whether gum is installed on the host. +# +# Prompt labels are written to stderr (via `read "?prompt"`); the chosen value +# is written to stdout. Callers must capture stdout via command substitution +# and let stderr flow to the terminal. + +# Sentinel value for CKIPPER_NO_GUM that disables gum even when installed. +readonly _CORE_PROMPT_NO_GUM_SENTINEL="1" + +# Decide whether to use gum for the next prompt. +# +# Returns: 0 when gum should be used (CKIPPER_NO_GUM != "1" AND `gum` is on +# PATH); 1 otherwise. +_core_prompt_use_gum() { + [[ "$CKIPPER_NO_GUM" == "$_CORE_PROMPT_NO_GUM_SENTINEL" ]] && return 1 + command -v gum >/dev/null 2>&1 +} + +# Prompt for a free-form string. Returns the entered value, or the supplied +# default when input is empty. +# +# Args: $1 — label shown to the user; $2 — default value used on empty input. +# Returns: 0 always; prints the resolved value to stdout. +_core_prompt_input() { + local label="$1" default="$2" + if _core_prompt_use_gum; then + local out + out=$(gum input --placeholder "$default" --prompt "$label > ") + echo "${out:-$default}" + return 0 + fi + local val="" + read -r "val?$label [$default]: " + echo "${val:-$default}" +} + +# Prompt for a yes/no confirmation. Default is no — empty input is treated as +# rejection so unattended runs fail closed. +# +# Args: $1 — label shown to the user. +# Returns: 0 on yes (input begins with y or Y); 1 otherwise. +_core_prompt_confirm() { + local label="$1" + if _core_prompt_use_gum; then + gum confirm "$label" + return $? + fi + local ans="" + read -r "ans?$label [y/N]: " + [[ "$ans" =~ ^[yY] ]] +} + +# Render a numbered list of items to stderr — helper for _core_prompt_choose's +# fallback path. 
Kept separate to keep _core_prompt_choose under the 25-line +# cap (matches the _core_style_table_print_row precedent in style.zsh). +# +# Args: $@ — items to render, one per line, prefixed with their 1-based index. +# Returns: 0 always; output goes to stderr so the chosen value (stdout) stays +# pipeable. +_core_prompt_choose_render_list() { + local index=1 item + for item in "$@"; do + printf ' %d) %s\n' "$index" "$item" >&2 + ((index++)) + done +} + +# Prompt the user to pick one of the supplied items. Echoes the chosen item +# to stdout. +# +# Args: $1 — label shown to the user; $2..$N — items to choose from. +# Returns: 0 on a valid pick; 1 if the input is non-numeric, less than 1, or +# greater than the number of items. +_core_prompt_choose() { + local label="$1" + shift + if _core_prompt_use_gum; then + printf '%s\n' "$@" | gum choose --header "$label" + return $? + fi + echo "$label" >&2 + _core_prompt_choose_render_list "$@" + local choice="" + read -r "choice?Selection: " + [[ "$choice" =~ ^[0-9]+$ ]] || return 1 + (( choice >= 1 && choice <= $# )) || return 1 + echo "${@[choice]}" +} + +# Run a command with a spinner indicator. Forwards the command's exit status. +# +# Args: $1 — label shown alongside the spinner; $2..$N — command and args to +# execute. The label is consumed by this function and never reaches the +# wrapped command. +# Returns: the exit status of the wrapped command. +_core_prompt_spin() { + local label="$1" + shift + if _core_prompt_use_gum; then + gum spin --spinner dot --title "$label" -- "$@" + return $? + fi + echo "$label..." >&2 + "$@" +} diff --git a/lib/core/prompt_test.bats b/lib/core/prompt_test.bats new file mode 100644 index 0000000..814f171 --- /dev/null +++ b/lib/core/prompt_test.bats @@ -0,0 +1,121 @@ +#!/usr/bin/env bats +# Module-level tests for lib/core/prompt.zsh. 
+# prompt.zsh is zsh-only (uses `read -r "?prompt"` and zsh array indexing), so +# every assertion spawns a zsh subshell that sources prompt.zsh and runs the +# function under test (matching the pattern in style_test.bats and +# help_test.bats). +# +# CKIPPER_NO_GUM=1 forces the pure-zsh fallback path so tests are deterministic +# regardless of whether `gum` is installed on the runner. Stderr is redirected +# to /dev/null inside the zsh -c command so the read-prompt label (printed to +# stderr by `read "?..."`) doesn't leak into bats `run`'s captured $output — +# without that, the default-value test is unable to distinguish "echoed +# default" from "default appears in the prompt label `[default]:`". + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env +} + +teardown() { + teardown_isolated_env +} + +# Helper: source prompt.zsh in zsh, feed stdin, run zsh_cmd with stderr muted. +# +# Args: $1 — stdin payload to pipe to the prompt; $2 — zsh command to execute. +# Side effect: populates $status / $output / $lines as with bats `run`. 
+_run_prompt() { + local stdin="$1" zsh_cmd="$2" + run env CKIPPER_NO_GUM=1 PATH="$PATH" \ + zsh -c "source \"$REPO_ROOT/lib/core/prompt.zsh\"; $zsh_cmd 2>/dev/null" <<<"$stdin" +} + +@test "_core_prompt_input echoes the user's input" { + _run_prompt "hello" '_core_prompt_input "Q" "thedefault"' + + [ "$status" -eq 0 ] + [ "$output" = "hello" ] +} + +@test "_core_prompt_input returns default on empty input" { + _run_prompt "" '_core_prompt_input "Q" "thedefault"' + + [ "$status" -eq 0 ] + [ "$output" = "thedefault" ] +} + +@test "_core_prompt_confirm returns 0 on y" { + _run_prompt "y" '_core_prompt_confirm "Proceed?"' + + [ "$status" -eq 0 ] +} + +@test "_core_prompt_confirm returns 0 on uppercase Y" { + _run_prompt "Y" '_core_prompt_confirm "Proceed?"' + + [ "$status" -eq 0 ] +} + +@test "_core_prompt_confirm returns 1 on n" { + _run_prompt "n" '_core_prompt_confirm "Proceed?"' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_confirm returns 1 on empty input" { + _run_prompt "" '_core_prompt_confirm "Proceed?"' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_choose returns the picked item by index" { + _run_prompt "1" '_core_prompt_choose "Pick" alpha beta gamma' + + [ "$status" -eq 0 ] + [ "$output" = "alpha" ] +} + +@test "_core_prompt_choose returns last item when last index is selected" { + _run_prompt "3" '_core_prompt_choose "Pick" alpha beta gamma' + + [ "$status" -eq 0 ] + [ "$output" = "gamma" ] +} + +@test "_core_prompt_choose returns 1 on out-of-range index" { + _run_prompt "99" '_core_prompt_choose "Pick" alpha beta gamma' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_choose returns 1 on non-numeric input" { + _run_prompt "abc" '_core_prompt_choose "Pick" alpha beta gamma' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_choose returns 1 on zero index" { + _run_prompt "0" '_core_prompt_choose "Pick" alpha beta gamma' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_use_gum returns 1 when CKIPPER_NO_GUM=1" { + _run_prompt "" 
'_core_prompt_use_gum' + + [ "$status" -eq 1 ] +} + +@test "_core_prompt_spin runs the command and forwards exit status 0" { + _run_prompt "" '_core_prompt_spin "Working" true' + + [ "$status" -eq 0 ] +} + +@test "_core_prompt_spin forwards non-zero exit status" { + _run_prompt "" '_core_prompt_spin "Working" false' + + [ "$status" -eq 1 ] +} diff --git a/lib/core/registry.zsh b/lib/core/registry.zsh new file mode 100644 index 0000000..8d020ed --- /dev/null +++ b/lib/core/registry.zsh @@ -0,0 +1,315 @@ +#!/usr/bin/env zsh +# Shared registry read/write primitives for managing the ckipper accounts registry. + +readonly _CORE_REGISTRY_FILE_PERMS=600 +readonly _CORE_REGISTRY_LOCK_NOTIFY_THRESHOLD_ATTEMPTS=30 +readonly _CORE_REGISTRY_LOCK_MAX_ATTEMPTS=200 +readonly _CORE_REGISTRY_STALE_LOCK_AGE_THRESHOLD_SECONDS=30 +readonly _CORE_REGISTRY_LOCK_RETRY_INTERVAL_SECONDS=0.05 + +# Perform an atomic registry update via flock (Linux/GNU systems). +# +# Args: +# $1 — jq filter string +# $@ — remaining args passed to jq +# +# Returns: +# 0 on success; 1 on jq or write failure. +_core_registry_update_with_flock() { + local jq_filter="$1"; shift + local lock="$CKIPPER_DIR/.registry.lock" + local rc=1 + : > "$lock" + { + flock -x 9 + local registry_tmpfile; registry_tmpfile=$(mktemp "$CKIPPER_DIR/.registry.tmp.XXXXXX") + if jq "$@" "$jq_filter" "$CKIPPER_REGISTRY" > "$registry_tmpfile" 2>/dev/null; then + mv "$registry_tmpfile" "$CKIPPER_REGISTRY" + chmod "$_CORE_REGISTRY_FILE_PERMS" "$CKIPPER_REGISTRY" + rc=0 + else + rm -f "$registry_tmpfile" + fi + } 9>"$lock" + return $rc +} + +# Recover a stale mkdir-based lock directory and reset the attempt counter. +# Prints a warning to stderr, tries rmdir first, then falls back to rm -rf with warning. +# +# Args: +# $1 — lockdir path +# $2 — age in seconds (for the message) +# +# Returns: +# 0 after recovery attempt. +# +# Errors (stderr): +# "Cleaning up old lock..." — always printed when called. +# "Warning: rmdir failed..." 
— when rmdir fails and rm -rf fallback is used. +_core_registry_recover_stale_lock() { + local lockdir="$1" lock_age_seconds="$2" + echo "Cleaning up old lock from a previous session (age ${lock_age_seconds}s)..." >&2 + if ! rmdir "$lockdir" 2>/dev/null; then + echo "Warning: rmdir failed on lock dir (unexpected contents); using rm -rf as fallback." >&2 + rm -rf "$lockdir" + fi +} + +# Check if the mkdir lock is stale and recover if so. +# +# Args: +# $1 — lockdir path +# $2 — current attempts count +# +# Returns: +# 0 if stale lock was recovered (caller should reset attempts); +# 1 if not stale or lock age could not be determined; +# 2 if the lock is live but held too long (caller should abort). +_core_registry_check_stale_lock() { + local lockdir="$1" attempts="$2" + (( attempts < _CORE_REGISTRY_LOCK_MAX_ATTEMPTS )) && return 1 + local current_time_epoch modification_time_epoch lock_age_seconds + current_time_epoch=$(date +%s) + modification_time_epoch=$(_core_stat_mtime "$lockdir") + lock_age_seconds=$(( current_time_epoch - ${modification_time_epoch:-$current_time_epoch} )) + if (( lock_age_seconds > _CORE_REGISTRY_STALE_LOCK_AGE_THRESHOLD_SECONDS )); then + _core_registry_recover_stale_lock "$lockdir" "$lock_age_seconds" + return 0 + fi + echo "Registry lock held by another process for ${lock_age_seconds}s. Try again shortly." >&2 + return 2 +} + +# Wait for the mkdir lock to become available, with stale-lock recovery. +# The caller is responsible for releasing the lock — DO NOT install an EXIT +# trap here. In zsh, an EXIT trap set inside a function fires when *that* +# function returns, which would remove the lockdir before the caller's +# critical section runs. The trap belongs in the caller (the function whose +# lifetime spans the critical section). +# +# Args: +# $1 — lockdir path +# +# Returns: +# 0 when lock is acquired; 1 on timeout. +_core_registry_acquire_mkdir_lock() { + local lockdir="$1" + local attempts=0 has_notified="false" + while ! 
mkdir "$lockdir" 2>/dev/null; do + (( attempts++ )) + if (( attempts == _CORE_REGISTRY_LOCK_NOTIFY_THRESHOLD_ATTEMPTS )) && [[ "$has_notified" = "false" ]]; then + echo "Waiting on registry lock..." >&2 + has_notified="true" + fi + local stale_rc + _core_registry_check_stale_lock "$lockdir" "$attempts"; stale_rc=$? + if (( stale_rc == 0 )); then + attempts=0 + continue + fi + (( stale_rc == 2 )) && return 1 + sleep "$_CORE_REGISTRY_LOCK_RETRY_INTERVAL_SECONDS" + done +} + +# Perform an atomic registry update via mkdir lock (macOS fallback — no flock). +# +# Args: +# $1 — jq filter string +# $@ — remaining args passed to jq +# +# Returns: +# 0 on success; 1 on lock timeout or jq/write failure. +_core_registry_update_mkdir_fallback() { + local jq_filter="$1"; shift + setopt local_options local_traps + local lockdir="$CKIPPER_DIR/.registry.lock.d" + _core_registry_acquire_mkdir_lock "$lockdir" || return 1 + # Trap lives in this function (not in acquire) so it fires when the + # critical section is done — not when acquire returns mid-critical-section. + # Use double-quoted trap text so $lockdir is expanded NOW (at trap-set time); + # by the time the trap actually fires (after this function returns), our + # local $lockdir is out of scope, so a deferred-expansion form (single quotes) + # would expand to the empty string and rmdir would silently no-op. + trap "rmdir '$lockdir' 2>/dev/null" EXIT + local registry_tmpfile; registry_tmpfile=$(mktemp "$CKIPPER_DIR/.registry.tmp.XXXXXX") + if jq "$@" "$jq_filter" "$CKIPPER_REGISTRY" > "$registry_tmpfile" 2>/dev/null; then + mv "$registry_tmpfile" "$CKIPPER_REGISTRY" + chmod "$_CORE_REGISTRY_FILE_PERMS" "$CKIPPER_REGISTRY" + return 0 + fi + rm -f "$registry_tmpfile" + return 1 +} + +# Atomic registry write under flock (or mkdir-fallback for macOS). +# +# Args: +# $1 — jq filter string; jq error() calls propagate as non-zero exit. +# $@ — remaining args passed through to jq (e.g. 
--arg n "$name")
+#
+# Returns:
+#   0 on successful jq+write; 1 on jq error or write failure.
+_core_registry_update() {
+  mkdir -p "$CKIPPER_DIR"
+  if command -v flock >/dev/null 2>&1; then
+    _core_registry_update_with_flock "$@"
+  else
+    _core_registry_update_mkdir_fallback "$@"
+  fi
+}
+
+# Initialize an empty registry with version field. Idempotent under concurrency
+# via atomic create (mv -n) — two concurrent ckipper init's won't clobber each other.
+#
+# Returns:
+#   0 on success (including when the registry already exists); 1 when
+#   CKIPPER_REGISTRY_VERSION is invalid.
+#
+# Errors (stderr):
+#   "Error: CKIPPER_REGISTRY_VERSION is not a positive integer" — when version var is invalid.
+_core_registry_init() {
+  [[ -f "$CKIPPER_REGISTRY" ]] && return 0
+  if [[ ! "$CKIPPER_REGISTRY_VERSION" =~ ^[1-9][0-9]*$ ]]; then
+    echo "Error: CKIPPER_REGISTRY_VERSION is not a positive integer: '$CKIPPER_REGISTRY_VERSION'" >&2
+    return 1
+  fi
+  mkdir -p "$CKIPPER_DIR"
+  local registry_tmpfile; registry_tmpfile=$(mktemp "$CKIPPER_DIR/.registry.init.XXXXXX")
+  jq -n --argjson v "$CKIPPER_REGISTRY_VERSION" \
+    '{"version": $v, "default": null, "accounts": {}}' > "$registry_tmpfile"
+  # mv -n (no-clobber): if another writer beat us, leave their file alone.
+  # mv -n exits 0 even when it skips, so remove the tmpfile unconditionally
+  # afterwards — a no-op when the move succeeded.
+  mv -n "$registry_tmpfile" "$CKIPPER_REGISTRY" 2>/dev/null
+  rm -f "$registry_tmpfile"
+  [[ -f "$CKIPPER_REGISTRY" ]] && chmod "$_CORE_REGISTRY_FILE_PERMS" "$CKIPPER_REGISTRY"
+}
+
+# Build a JSON object of every account-scope schema key with its default
+# value, suitable for embedding in a jq filter via `--argjson p "$(...)"`.
+# Used by both the v1→v2 migration and the account-add finalize step so the
+# two callers cannot drift from the schema.
+#
+# Reads: _CKIPPER_SCHEMA_TYPE, _CKIPPER_SCHEMA_DEFAULT, _CKIPPER_SCHEMA_SCOPE
+# (lib/core/schema.zsh — must be sourced before this is called).
+#
+# Limitations: only handles bool, int, string, and path types. The current
+# schema has no account-scope `int_array` keys; if one is added, extend the
+# case below to render the comma-separated default as a JSON array.
+# +# Returns: 0; emits a valid JSON object string to stdout (e.g. +# `{"always_docker":false,"always_firewall":false,"ssh_forward":true}`). +_core_registry_account_defaults_json() { + local key entries="" + for key in "${(@ko)_CKIPPER_SCHEMA_TYPE}"; do + [[ "${_CKIPPER_SCHEMA_SCOPE[$key]}" == "account" ]] || continue + local val="${_CKIPPER_SCHEMA_DEFAULT[$key]}" + local type="${_CKIPPER_SCHEMA_TYPE[$key]}" + case "$type" in + bool | int) entries+="\"$key\":$val," ;; + *) entries+="\"$key\":\"$val\"," ;; + esac + done + echo "{${entries%,}}" +} + +# Auto-migrate a v1 registry to v2 in place. +# Backs up the v1 file (refuses to migrate without a backup), then rewrites +# accounts.json with .version=2 and a per-account .preferences block. Existing +# preferences win over defaults so partial-v2 fixtures keep their values. +# +# Returns: +# 0 on successful migration; 1 if backup write or jq update failed. +# +# Errors (stderr): +# "Error: failed to write migration backup..." — when cp to the .v1.bak path fails. +_core_registry_migrate_v1_to_v2() { + local backup="${CKIPPER_REGISTRY}.v1.bak.$(date -u +%Y%m%dT%H%M%SZ)" + if ! cp "$CKIPPER_REGISTRY" "$backup" 2>/dev/null; then + echo "Error: failed to write migration backup $backup" >&2 + return 1 + fi + local defaults + defaults=$(_core_registry_account_defaults_json) + _core_registry_update ' + .version = 2 + | .accounts = ( + .accounts | with_entries( + .value.preferences = ($defaults + (.value.preferences // {})) + ) + ) + ' --argjson defaults "$defaults" +} + +# Refuse to operate on a registry whose version we don't understand OR whose schema +# is corrupt (e.g. user manually edited and turned .accounts into an array). +# Auto-migrates a v1 registry to v2 (with backup) before checking the version. +# +# Returns: +# 0 if registry is absent or valid; 1 on version mismatch, migration failure, +# or corrupt schema. +# +# Errors (stderr): +# "Migrating accounts.json v1 → v2..." 
— informational notice during auto-migration. +# "Error: registry version..." — on version mismatch. +# "Error: ... is corrupt..." — on bad schema. +_core_registry_check_version() { + [[ ! -f "$CKIPPER_REGISTRY" ]] && return 0 + local cur + cur=$(jq -r '.version // 0' "$CKIPPER_REGISTRY" 2>/dev/null) + if [[ "$cur" == "1" ]] && (( CKIPPER_REGISTRY_VERSION >= 2 )); then + echo "Migrating accounts.json v1 → v2..." >&2 + _core_registry_migrate_v1_to_v2 || return 1 + fi + local v + v=$(jq -r '.version // 0' "$CKIPPER_REGISTRY" 2>/dev/null) + if (( v != CKIPPER_REGISTRY_VERSION )); then + echo "Error: registry version $v not supported (this ckipper expects $CKIPPER_REGISTRY_VERSION). Update ckipper or restore from backup." >&2 + return 1 + fi + _core_registry_assert_accounts_object || return 1 +} + +# Verify that .accounts is a JSON object (not an array or other type). +# Surface a clear error with manual-recovery instructions when it isn't. +# +# Returns: +# 0 if the schema looks valid; 1 if .accounts is corrupt. +# +# Errors (stderr): +# "Error: ... is corrupt..." — when .accounts is not an object. +_core_registry_assert_accounts_object() { + if jq -e '.accounts | type == "object"' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then + return 0 + fi + echo "Error: $CKIPPER_REGISTRY is corrupt (.accounts is not an object)." >&2 + echo "Backup and re-init manually:" >&2 + echo " mv $CKIPPER_REGISTRY $CKIPPER_REGISTRY.corrupt-\$(date +%s)" >&2 + return 1 +} + +# Validate that an account exists in the registry. Echoes its config_dir on success. +# +# Args: +# $1 — account name +# +# Returns: +# 0 with config_dir on stdout; 1 if account not found. +# +# Errors (stderr): +# "Account '...' is not registered." — when account absent. +_core_account_dir() { + local name="$1" + if ! jq -e --arg n "$name" '.accounts[$n]' "$CKIPPER_REGISTRY" >/dev/null 2>&1; then + echo "Account '$name' is not registered." 
>&2
+    return 1
+  fi
+  jq -r --arg n "$name" '.accounts[$n].config_dir' "$CKIPPER_REGISTRY"
+}
+
+# Read the entire registry JSON to stdout. Used by both ckipper subcommands
+# and (later) by lib/w/ to satisfy the no-sibling-cross-imports rule.
+#
+# Returns:
+#   0 on success; non-zero if registry file is missing.
+_core_registry_read() {
+  cat "$CKIPPER_REGISTRY"
+}
diff --git a/lib/core/registry_migration_test.bats b/lib/core/registry_migration_test.bats
new file mode 100644
index 0000000..e6dcaf4
--- /dev/null
+++ b/lib/core/registry_migration_test.bats
@@ -0,0 +1,225 @@
+#!/usr/bin/env bats
+# Module-level tests for the v1 -> v2 auto-migration in lib/core/registry.zsh.
+# Covers detect, mutate, defaults, backup, idempotency, and existing-prefs merge.
+
+load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash"
+
+setup() {
+  setup_isolated_env
+  export CKIPPER_REGISTRY_VERSION=2
+  export _CKIPPER_TEST_OSTYPE="darwin19.0"
+}
+
+teardown() {
+  teardown_isolated_env
+}
+
+# Helper: source registry (and its utils + schema deps) then run zsh_cmd.
+# Mirrors the helper in registry_test.bats, but defaults the version env var to 2
+# so the migration path under test is exercised.
+_run_registry() {
+  local zsh_cmd="$1"
+  run env HOME="$TMP_HOME" \
+    CKIPPER_DIR="$CKIPPER_DIR" \
+    CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \
+    CKIPPER_REGISTRY_VERSION="${CKIPPER_REGISTRY_VERSION:-2}" \
+    _CKIPPER_TEST_OSTYPE="${_CKIPPER_TEST_OSTYPE:-darwin19.0}" \
+    PATH="$PATH" \
+    zsh -c "source \"$REPO_ROOT/lib/core/utils.zsh\"; source \"$REPO_ROOT/lib/core/schema.zsh\"; source \"$REPO_ROOT/lib/core/registry.zsh\"; $zsh_cmd"
+}
+
+# Seed a v1 registry fixture with a single account.
+_seed_v1_registry() { + cat > "$CKIPPER_REGISTRY" <<'EOF' +{ + "version": 1, + "default": "personal", + "accounts": { + "personal": { + "config_dir": "/tmp/.claude-personal", + "keychain_service": "Claude Code-credentials", + "registered_at": "2026-04-30T12:00:00Z" + } + } +} +EOF + chmod 600 "$CKIPPER_REGISTRY" +} + +@test "v1 registry migrates to v2 on _core_registry_check_version" { + _seed_v1_registry + + _run_registry "_core_registry_check_version" + + [ "$status" -eq 0 ] + local v; v=$(jq -r '.version' "$CKIPPER_REGISTRY") + [ "$v" = "2" ] +} + +@test "v2 migration adds preferences with safe defaults" { + _seed_v1_registry + + _run_registry "_core_registry_check_version" + + [ "$status" -eq 0 ] + local always_docker always_firewall ssh_forward + always_docker=$(jq -r '.accounts.personal.preferences.always_docker' "$CKIPPER_REGISTRY") + always_firewall=$(jq -r '.accounts.personal.preferences.always_firewall' "$CKIPPER_REGISTRY") + ssh_forward=$(jq -r '.accounts.personal.preferences.ssh_forward' "$CKIPPER_REGISTRY") + [ "$always_docker" = "false" ] + [ "$always_firewall" = "false" ] + [ "$ssh_forward" = "true" ] +} + +@test "v2 migration preserves existing fields" { + _seed_v1_registry + + _run_registry "_core_registry_check_version" + + [ "$status" -eq 0 ] + local default config_dir keychain registered_at + default=$(jq -r '.default' "$CKIPPER_REGISTRY") + config_dir=$(jq -r '.accounts.personal.config_dir' "$CKIPPER_REGISTRY") + keychain=$(jq -r '.accounts.personal.keychain_service' "$CKIPPER_REGISTRY") + registered_at=$(jq -r '.accounts.personal.registered_at' "$CKIPPER_REGISTRY") + [ "$default" = "personal" ] + [ "$config_dir" = "/tmp/.claude-personal" ] + [ "$keychain" = "Claude Code-credentials" ] + [ "$registered_at" = "2026-04-30T12:00:00Z" ] +} + +@test "v2 migration writes a backup file containing the pre-migration JSON" { + _seed_v1_registry + local pre_contents; pre_contents=$(cat "$CKIPPER_REGISTRY") + + _run_registry "_core_registry_check_version" 
+ + [ "$status" -eq 0 ] + local -a backups + backups=( "$CKIPPER_REGISTRY".v1.bak.* ) + [ -f "${backups[0]}" ] + local backup_contents; backup_contents=$(cat "${backups[0]}") + [ "$backup_contents" = "$pre_contents" ] +} + +@test "v2 migration is idempotent (no-op on already-v2)" { + _seed_v1_registry + + # First call performs the migration. + _run_registry "_core_registry_check_version" + [ "$status" -eq 0 ] + local after_first; after_first=$(cat "$CKIPPER_REGISTRY") + local first_backup_count; first_backup_count=$(ls -1 "$CKIPPER_REGISTRY".v1.bak.* 2>/dev/null | wc -l | tr -d ' ') + + # Second call should observe v2 and do nothing further. + sleep 1 # ensure any second backup would land in a distinct timestamp slot + _run_registry "_core_registry_check_version" + [ "$status" -eq 0 ] + local after_second; after_second=$(cat "$CKIPPER_REGISTRY") + local second_backup_count; second_backup_count=$(ls -1 "$CKIPPER_REGISTRY".v1.bak.* 2>/dev/null | wc -l | tr -d ' ') + + [ "$after_first" = "$after_second" ] + [ "$first_backup_count" = "$second_backup_count" ] +} + +@test "v2 migration preserves existing preferences (defaults merge under existing values)" { + cat > "$CKIPPER_REGISTRY" <<'EOF' +{ + "version": 1, + "default": "personal", + "accounts": { + "personal": { + "config_dir": "/tmp/.claude-personal", + "keychain_service": null, + "preferences": {"ssh_forward": false} + } + } +} +EOF + chmod 600 "$CKIPPER_REGISTRY" + + _run_registry "_core_registry_check_version" + + [ "$status" -eq 0 ] + local ssh_forward always_docker always_firewall + ssh_forward=$(jq -r '.accounts.personal.preferences.ssh_forward' "$CKIPPER_REGISTRY") + always_docker=$(jq -r '.accounts.personal.preferences.always_docker' "$CKIPPER_REGISTRY") + always_firewall=$(jq -r '.accounts.personal.preferences.always_firewall' "$CKIPPER_REGISTRY") + [ "$ssh_forward" = "false" ] + [ "$always_docker" = "false" ] + [ "$always_firewall" = "false" ] +} + +# ── End-to-end migration triggers 
──────────────────────────────────── +# These tests confirm the v1→v2 auto-migration fires from each entry-point +# the user is likely to invoke first on an upgrade. Fix #2 added +# `_core_registry_check_version` to the config and account-list handlers +# that previously lacked it; without these tests, a future regression +# (e.g. forgetting to add the call to a new entry-point) would silently +# write v2-shape `preferences` blocks while leaving `.version=1`. + +# Seed a v1 registry with a `work` account (used as the invocation target). +_seed_v1_registry_work() { + cat > "$CKIPPER_REGISTRY" <<'EOF' +{ + "version": 1, + "default": "work", + "accounts": { + "work": { + "config_dir": "/tmp/.claude-work", + "keychain_service": null, + "registered_at": "2026-04-30T12:00:00Z" + } + } +} +EOF + chmod 600 "$CKIPPER_REGISTRY" + mkdir -p "$CKIPPER_DIR/docker" + : > "$CKIPPER_DIR/docker/ckipper-config.zsh" +} + +@test "ckipper config set on v1 registry triggers migration" { + _seed_v1_registry_work + + run_ckipper config set --account work always_docker true + + [ "$status" -eq 0 ] + local v always_docker + v=$(jq -r '.version' "$CKIPPER_REGISTRY") + always_docker=$(jq -r '.accounts.work.preferences.always_docker' "$CKIPPER_REGISTRY") + [ "$v" = "2" ] + [ "$always_docker" = "true" ] + # The migration backup must be present. + local -a backups + backups=( "$CKIPPER_REGISTRY".v1.bak.* ) + [ -f "${backups[0]}" ] +} + +@test "ckipper config get on v1 registry triggers migration" { + _seed_v1_registry_work + + run_ckipper config get notify_bell + + [ "$status" -eq 0 ] + local v + v=$(jq -r '.version' "$CKIPPER_REGISTRY") + [ "$v" = "2" ] +} + +@test "_core_registry_account_defaults_json emits all account-scope schema defaults" { + # Source the schema and helper, dump JSON, validate keys + values via jq. + _run_registry "_core_registry_account_defaults_json" + + [ "$status" -eq 0 ] + # Output must be parseable JSON. 
+ echo "$output" | jq empty + # Every account-scope key in the current schema must be present with + # the expected default value. + local always_docker always_firewall ssh_forward + always_docker=$(echo "$output" | jq -r '.always_docker') + always_firewall=$(echo "$output" | jq -r '.always_firewall') + ssh_forward=$(echo "$output" | jq -r '.ssh_forward') + [ "$always_docker" = "false" ] + [ "$always_firewall" = "false" ] + [ "$ssh_forward" = "true" ] +} diff --git a/lib/core/registry_test.bats b/lib/core/registry_test.bats new file mode 100644 index 0000000..20ae488 --- /dev/null +++ b/lib/core/registry_test.bats @@ -0,0 +1,150 @@ +#!/usr/bin/env bats +# Module-level tests for lib/core/registry.zsh. +# Covers init, check_version, account_dir, registry_read, and update. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +setup() { + setup_isolated_env + export CKIPPER_REGISTRY_VERSION=1 + export _CKIPPER_TEST_OSTYPE="darwin19.0" +} + +teardown() { + teardown_isolated_env +} + +# Helper: source registry (and its utils dep) then run zsh_cmd. 
+_run_registry() { + local zsh_cmd="$1" + run env HOME="$TMP_HOME" \ + CKIPPER_DIR="$CKIPPER_DIR" \ + CKIPPER_REGISTRY="$CKIPPER_REGISTRY" \ + CKIPPER_REGISTRY_VERSION="${CKIPPER_REGISTRY_VERSION:-1}" \ + _CKIPPER_TEST_OSTYPE="${_CKIPPER_TEST_OSTYPE:-darwin19.0}" \ + PATH="$PATH" \ + zsh -c "source \"$REPO_ROOT/lib/core/utils.zsh\"; source \"$REPO_ROOT/lib/core/registry.zsh\"; $zsh_cmd" +} + +@test "_core_registry_init creates a fresh registry with mode 600" { + _run_registry "_core_registry_init" + + [ "$status" -eq 0 ] + assert_file_exists "$CKIPPER_REGISTRY" + assert_file_mode "$CKIPPER_REGISTRY" "600" +} + +@test "_core_registry_init produces valid JSON with version 1" { + _run_registry "_core_registry_init" + + [ "$status" -eq 0 ] + local version + version=$(jq -r '.version' "$CKIPPER_REGISTRY") + [ "$version" = "1" ] +} + +@test "_core_registry_init is idempotent (does not overwrite existing registry)" { + echo '{"version":1,"default":"preserved","accounts":{}}' > "$CKIPPER_REGISTRY" + chmod 600 "$CKIPPER_REGISTRY" + + _run_registry "_core_registry_init" + + [ "$status" -eq 0 ] + local default + default=$(jq -r '.default' "$CKIPPER_REGISTRY") + [ "$default" = "preserved" ] +} + +@test "_core_registry_init rejects non-integer version" { + export CKIPPER_REGISTRY_VERSION="not-a-number" + + _run_registry "_core_registry_init" + + [ "$status" -ne 0 ] +} + +@test "_core_registry_check_version passes on matching version" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + _run_registry "_core_registry_check_version" + + [ "$status" -eq 0 ] +} + +@test "_core_registry_check_version fails on version mismatch" { + echo '{"version":99,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + _run_registry "_core_registry_check_version" + + [ "$status" -ne 0 ] + [[ "$output" =~ "version" ]] +} + +@test "_core_account_dir returns config_dir for a known account" { + echo 
'{"version":1,"default":null,"accounts":{"work":{"config_dir":"/tmp/.claude-work","keychain_service":null}}}' > "$CKIPPER_REGISTRY" + + _run_registry "_core_account_dir work" + + [ "$status" -eq 0 ] + [ "$output" = "/tmp/.claude-work" ] +} + +@test "_core_account_dir fails for an unknown account" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + + _run_registry "_core_account_dir nobody" + + [ "$status" -ne 0 ] + [[ "$output" =~ "not registered" ]] +} + +# Regression: in zsh, an EXIT trap set inside a function fires when that +# function returns, regardless of `local_traps` in the caller. The previous +# implementation set the trap inside _core_registry_acquire_mkdir_lock, so +# the lockdir was removed before the caller's jq write — a concurrent writer +# could grab the lock and clobber the in-flight update. The trap must live +# in _core_registry_update_mkdir_fallback (the function that owns the +# critical section). +@test "_core_registry_acquire_mkdir_lock leaves lockdir intact for caller after acquire returns" { + # Caller mimics the production pattern in _core_registry_update_mkdir_fallback: + # declares its own `local lockdir` (so any stray trap referencing $lockdir resolves + # in caller scope) and asserts the lockdir is still present when acquire returns. + _run_registry ' + caller() { + setopt local_options local_traps + local lockdir="$CKIPPER_DIR/.registry.lock.d" + _core_registry_acquire_mkdir_lock "$lockdir" || return 1 + [[ -d "$lockdir" ]] || { + echo "BUG: lockdir was removed before caller could use it" >&2 + return 2 + } + rmdir "$lockdir" + } + caller + ' + + [ "$status" -eq 0 ] +} + +@test "_core_registry_update_mkdir_fallback releases lockdir after successful update" { + echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY" + chmod 600 "$CKIPPER_REGISTRY" + + _run_registry '_core_registry_update_mkdir_fallback ".default = \"alice\""' + + [ "$status" -eq 0 ] + [[ ! 
-d "$CKIPPER_DIR/.registry.lock.d" ]]
+    local default
+    default=$(jq -r '.default' "$CKIPPER_REGISTRY")
+    [ "$default" = "alice" ]
+}
+
+@test "_core_registry_update_mkdir_fallback releases lockdir after jq failure" {
+    echo '{"version":1,"default":null,"accounts":{}}' > "$CKIPPER_REGISTRY"
+    chmod 600 "$CKIPPER_REGISTRY"
+
+    _run_registry '_core_registry_update_mkdir_fallback "this is not a valid jq filter @@@"'
+
+    [ "$status" -ne 0 ]
+    [[ ! -d "$CKIPPER_DIR/.registry.lock.d" ]]
+}
diff --git a/lib/core/schema.zsh b/lib/core/schema.zsh
new file mode 100644
index 0000000..a7650f9
--- /dev/null
+++ b/lib/core/schema.zsh
@@ -0,0 +1,67 @@
+#!/usr/bin/env zsh
+# Single source of truth for Ckipper's user-configurable settings.
+#
+# Consumed by:
+#   - lib/config/ — user-facing get/set/unset/list
+#   - lib/setup/ — wizard prompts
+#   - lib/account/doctor.zsh — schema verification
+#
+# To add a new key: append to all four arrays below. The dispatcher and
+# wizard pick it up automatically.
+
+# Type of each key. Currently supported: "string", "bool", "int", "path", "int_array".
+typeset -gA _CKIPPER_SCHEMA_TYPE=(
+    [projects_dir]="path"
+    [worktrees_dir]="path"
+    [ports]="int_array"
+    [default_branch]="string"
+    [dep_install_cmd]="string"
+    [notify_bell]="bool"
+    [aliases_auto_source]="bool"
+    [always_docker]="bool"
+    [always_firewall]="bool"
+    [ssh_forward]="bool"
+)
+
+# Default value (used when key is unset / on schema migration).
+typeset -gA _CKIPPER_SCHEMA_DEFAULT=(
+    [projects_dir]="$HOME/Developer"
+    [worktrees_dir]=""
+    [ports]="3000"
+    [default_branch]=""
+    [dep_install_cmd]="npm install"
+    [notify_bell]="true"
+    [aliases_auto_source]="true"
+    [always_docker]="false"
+    [always_firewall]="false"
+    [ssh_forward]="true"
+)
+
+# Scope: "global" (lives in ~/.ckipper/docker/ckipper-config.zsh) or
+# "account" (lives in ~/.ckipper/accounts.json under accounts.<name>.preferences).
+typeset -gA _CKIPPER_SCHEMA_SCOPE=( + [projects_dir]="global" + [worktrees_dir]="global" + [ports]="global" + [default_branch]="global" + [dep_install_cmd]="global" + [notify_bell]="global" + [aliases_auto_source]="global" + [always_docker]="account" + [always_firewall]="account" + [ssh_forward]="account" +) + +# One-line description shown by `ckipper config list` and the wizard. +typeset -gA _CKIPPER_SCHEMA_DESCRIPTION=( + [projects_dir]="Base directory containing your git projects." + [worktrees_dir]="Where worktrees are created (default: \$projects_dir/.worktrees)." + [ports]="Comma-separated ports to forward from container to host." + [default_branch]="Fallback base branch when origin/HEAD is unset." + [dep_install_cmd]="Command run after worktree creation. Empty = skip." + [notify_bell]="Install notify-bell hook into account dirs." + [aliases_auto_source]="install.sh auto-adds aliases.zsh source line to .zshrc." + [always_docker]="Default --docker on for this account." + [always_firewall]="Default --firewall on for this account." + [ssh_forward]="Forward host ~/.ssh into containers run with this account." +) diff --git a/lib/core/schema_test.bats b/lib/core/schema_test.bats new file mode 100644 index 0000000..6fca0f5 --- /dev/null +++ b/lib/core/schema_test.bats @@ -0,0 +1,76 @@ +#!/usr/bin/env bats +# Module-level tests for lib/core/schema.zsh. +# Verifies the four schema arrays (TYPE, DEFAULT, SCOPE, DESCRIPTION) declare +# the expected keys with the expected values. Schema is data-only zsh; bats +# runs in bash, so each assertion spawns a zsh subshell that sources the file +# and prints the value under test. + +load "${BATS_TEST_DIRNAME}/../../tests/lib/test-helper.bash" + +# Helper: source schema.zsh in zsh and print the array entry $1[$2]. 
+_schema_lookup() { + local array_name="$1" key="$2" + run zsh -c "source \"$REPO_ROOT/lib/core/schema.zsh\"; print -- \"\${${array_name}[${key}]}\"" + [ "$status" -eq 0 ] +} + +@test "schema declares known global keys" { + _schema_lookup _CKIPPER_SCHEMA_TYPE default_branch + [ "$output" = "string" ] + + _schema_lookup _CKIPPER_SCHEMA_TYPE dep_install_cmd + [ "$output" = "string" ] + + _schema_lookup _CKIPPER_SCHEMA_TYPE notify_bell + [ "$output" = "bool" ] + + _schema_lookup _CKIPPER_SCHEMA_TYPE aliases_auto_source + [ "$output" = "bool" ] +} + +@test "schema declares known per-account keys" { + _schema_lookup _CKIPPER_SCHEMA_SCOPE always_docker + [ "$output" = "account" ] + + _schema_lookup _CKIPPER_SCHEMA_SCOPE always_firewall + [ "$output" = "account" ] + + _schema_lookup _CKIPPER_SCHEMA_SCOPE ssh_forward + [ "$output" = "account" ] +} + +@test "schema defaults preserve current behavior" { + _schema_lookup _CKIPPER_SCHEMA_DEFAULT notify_bell + [ "$output" = "true" ] + + _schema_lookup _CKIPPER_SCHEMA_DEFAULT aliases_auto_source + [ "$output" = "true" ] + + _schema_lookup _CKIPPER_SCHEMA_DEFAULT dep_install_cmd + [ "$output" = "npm install" ] + + _schema_lookup _CKIPPER_SCHEMA_DEFAULT always_docker + [ "$output" = "false" ] + + _schema_lookup _CKIPPER_SCHEMA_DEFAULT always_firewall + [ "$output" = "false" ] + + _schema_lookup _CKIPPER_SCHEMA_DEFAULT ssh_forward + [ "$output" = "true" ] +} + +@test "schema description present for every key" { + # Walk every key in _CKIPPER_SCHEMA_TYPE and require a non-empty + # _CKIPPER_SCHEMA_DESCRIPTION entry. Any missing key is printed by name. 
+ run zsh -c " + source \"$REPO_ROOT/lib/core/schema.zsh\" + for key in \"\${(@k)_CKIPPER_SCHEMA_TYPE}\"; do + if [[ -z \"\${_CKIPPER_SCHEMA_DESCRIPTION[\$key]}\" ]]; then + print -- \"missing description for \$key\" + exit 1 + fi + done + exit 0 + " + [ "$status" -eq 0 ] +} diff --git a/lib/core/style.zsh b/lib/core/style.zsh new file mode 100644 index 0000000..a0fe913 --- /dev/null +++ b/lib/core/style.zsh @@ -0,0 +1,126 @@ +#!/usr/bin/env zsh +# ANSI color/style helpers for ckipper CLI output. +# +# All public helpers degrade gracefully when color is disabled (NO_COLOR set, +# or stdout is not a TTY). Tests pin behavior with CKIPPER_FORCE_COLOR=1, which +# overrides every other check so output is deterministic in non-TTY runs. +# +# Decision precedence in _core_style_enabled: +# 1. CKIPPER_FORCE_COLOR=1 → enabled (test override; wins over NO_COLOR). +# 2. NO_COLOR set non-empty → disabled (https://no-color.org). +# 3. stdout is a TTY → enabled. +# 4. otherwise → disabled. + +# Width of the horizontal rule drawn by _core_style_divider / _core_style_header. +readonly _CORE_STYLE_DIVIDER_WIDTH=72 + +# Column width used by _core_style_table for printf %-N s formatting. +readonly _CORE_STYLE_TABLE_COL_WIDTH=22 + +# ANSI reset sequence — emitted at the end of every colored span. +readonly _CORE_STYLE_RESET=$'\x1b[0m' + +# Map of friendly color/style names → ANSI SGR parameter codes. +# `gray` is mapped to bright-black (90) — true 8-color "gray" is rendered as +# bright-black on every terminal we care about. +typeset -gA _CORE_STYLE_COLOR_CODE=( + [red]=31 + [green]=32 + [yellow]=33 + [blue]=34 + [magenta]=35 + [cyan]=36 + [gray]=90 + [bold]=1 + [dim]=2 + [reset]=0 +) + +# Decide whether to emit ANSI color codes. +# +# Returns: 0 if color should be emitted; 1 otherwise. 
+_core_style_enabled() { + [[ "$CKIPPER_FORCE_COLOR" == "1" ]] && return 0 + [[ -n "$NO_COLOR" ]] && return 1 + [[ -t 1 ]] +} + +# Print text wrapped in an ANSI color escape, or plain text when color is disabled. +# +# Args: $1 — color name (must be a key of _CORE_STYLE_COLOR_CODE), +# $2 — text to wrap. +# Returns: 0 always; prints the (possibly colored) text followed by a newline. +_core_style_color() { + local name="$1" text="$2" + if ! _core_style_enabled; then + printf '%s\n' "$text" + return 0 + fi + local code="${_CORE_STYLE_COLOR_CODE[$name]}" + printf '\x1b[%sm%s%s\n' "$code" "$text" "$_CORE_STYLE_RESET" +} + +# Print a bracketed status badge in the given color (e.g. "[PASS]" in green). +# When color is disabled, prints "[