diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/CHANGE_VALIDATION.md b/openspec/changes/packaging-01-bundle-resource-payloads/CHANGE_VALIDATION.md index 15b2c84..a1097d3 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/CHANGE_VALIDATION.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/CHANGE_VALIDATION.md @@ -1,10 +1,11 @@ # Change Validation -- Date: 2026-03-24 +- Date: 2026-03-26 - Command: `openspec validate packaging-01-bundle-resource-payloads --strict` - Result: `Change 'packaging-01-bundle-resource-payloads' is valid` ## Notes -- Validation completed after proposal, design, specs, and tasks were created. -- The change is apply-ready and tracked in `openspec/CHANGE_ORDER.md` under Packaging and bundle payloads. +- The change scope has been extended to cover the missing backlog slash-prompt inventory and explicit installed-module IDE discovery verification. +- Proposal, design, tasks, delta specs, and the resource ownership audit now all require `specfact-backlog` prompt packaging in addition to backlog field-mapping seed packaging. +- Validation was re-run after the scope extension and the change remains strict-valid. diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/RESOURCE_OWNERSHIP_AUDIT.md b/openspec/changes/packaging-01-bundle-resource-payloads/RESOURCE_OWNERSHIP_AUDIT.md index 6494fb4..a87910b 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/RESOURCE_OWNERSHIP_AUDIT.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/RESOURCE_OWNERSHIP_AUDIT.md @@ -19,10 +19,20 @@ Date: 2026-03-25 | `resources/prompts/specfact.validate.md` | `specfact-codebase` | Covers `specfact repro`. | | `resources/prompts/shared/cli-enforcement.md` | Shared companion resource | Referenced by prompt templates via relative path; export is broken if this file is not shipped/copied with prompts. 
| -### Historical prompt leftovers observed outside the current source tree +### Backlog prompt inventory requiring restoration into the backlog bundle -- Installed prompt caches in the sibling `specfact-cli` environment still include backlog prompts such as `specfact.backlog-add.md`, `specfact.backlog-daily.md`, `specfact.backlog-refine.md`, and `specfact.sync-backlog.md`. -- Those files are not present in the current canonical source tree under `specfact-cli/resources/prompts`, so they are treated as historical residue rather than the current migration source of truth for this change. +Observed from archived ownership and prompt-migration work in the sibling `specfact-cli` repository: + +- `specfact.backlog-add.md` +- `specfact.backlog-daily.md` +- `specfact.backlog-refine.md` +- `specfact.sync-backlog.md` + +Audit result: + +- These prompts are backlog-bundle-owned slash-command resources and must be restored into `packages/specfact-backlog/resources/prompts/`. +- Their absence from the current live `specfact-cli/resources/prompts` tree does not remove the ownership requirement; it only means the canonical text must be recovered from history/change artifacts before packaging. +- Packaging coverage for backlog is incomplete until both the prompt inventory and the workspace-template seed set are present in published `specfact-backlog` artifacts. ### Backlog workspace-template seeds still living in `specfact-cli` @@ -71,4 +81,5 @@ These are already package-owned and are not migration inputs from the core resou 1. `packaging-01` must explicitly cover the prompt inventory above, not just “move prompts to corresponding bundles.” 2. Prompt companion files are part of the prompt payload contract because exported prompts reference them by relative path. 3. Backlog template migration must include the entire workspace seed set used by init/install flows, including `github_custom.yaml`. -4. 
Docs changes `docs-08` through `docs-12` need to describe bundle-owned prompts/templates and reject stale core-owned path references so no separate docs change is required. +4. Backlog prompt migration must restore the full slash-prompt inventory into the backlog bundle root so installed-module prompt discovery can surface `nold-ai/specfact-backlog`. +5. Docs changes `docs-08` through `docs-12` need to describe bundle-owned prompts/templates and reject stale core-owned path references so no separate docs change is required. diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/TDD_EVIDENCE.md b/openspec/changes/packaging-01-bundle-resource-payloads/TDD_EVIDENCE.md index 75e9bad..f1cfb7f 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/TDD_EVIDENCE.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/TDD_EVIDENCE.md @@ -3,14 +3,40 @@ ## Failing-first (design) - Added `tests/unit/test_bundle_resource_payloads.py` asserting stable `packages//resources/...` paths for audited prompts, `shared/cli-enforcement.md`, backlog field-mapping seeds (including non-ADO `github_custom.yaml`), integrity payload sensitivity to resource edits, and version-bump helper behavior. +- 2026-03-26: `python3 -m pytest tests/unit/test_bundle_resource_payloads.py -q` — failed as expected after extending coverage to backlog prompts. + - Missing source payload: `packages/specfact-backlog/resources/prompts/specfact.backlog-add.md` + - Missing module-root discovery path: `packages/specfact-backlog/resources/prompts/specfact.backlog-refine.md` + - Missing artifact payload entries under `specfact-backlog/resources/prompts/` ## Passing (implementation) - 2026-03-25: `python -m pytest tests/unit/test_bundle_resource_payloads.py` — 9 passed. +- 2026-03-26: `python3 -m pytest tests/unit/test_bundle_resource_payloads.py -q` — 13 passed. 
+- `packages/specfact-backlog/resources/prompts/` now ships the restored backlog prompt inventory: `specfact.backlog-add.md`, `specfact.backlog-daily.md`, `specfact.backlog-refine.md`, and `specfact.sync-backlog.md`. +- `packages/specfact-backlog/resources/prompts/shared/cli-enforcement.md` now ships with the backlog prompt payload so restored relative links resolve after export. +- `packages/specfact-backlog/module-package.yaml` was bumped to `0.41.15` and re-signed after the prompt payload changed. - Bundles now ship resources under each module package root (`resources/prompts`, `resources/templates/backlog/field_mappings`) with a mirror under `src/specfact_backlog/resources/...` for `find_package_resources_path("specfact_backlog", ...)`. - `specfact_backlog.backlog.field_mapping_resources.load_ado_framework_template_config` prefers backlog bundle + module-root paths before legacy `specfact_cli` templates (logic extracted from `commands.py` for clarity and reviewability). - Docs: `docs/authoring/publishing-modules.md` documents bundle-owned `resources/` and version/signature expectations. +## Artifact verification (task 4.6) + +- 2026-03-26: built a workflow-shaped backlog artifact at `/tmp/specfact-backlog-0.41.15.tar.gz` using the same inclusion/exclusion rules as `.github/workflows/publish-modules.yml`. +- Verified the archive contains: + - `specfact-backlog/resources/prompts/specfact.backlog-add.md` + - `specfact-backlog/resources/prompts/specfact.backlog-daily.md` + - `specfact-backlog/resources/prompts/specfact.backlog-refine.md` + - `specfact-backlog/resources/prompts/specfact.sync-backlog.md` + - `specfact-backlog/resources/prompts/shared/cli-enforcement.md` + +## Cross-repo prompt discovery/export verification (task 4.7) + +- Added `test_core_prompt_discovery_finds_installed_backlog_bundle` in `tests/unit/test_bundle_resource_payloads.py`. 
+- The test copies `packages/specfact-backlog/` into a temporary install-like module root, patches installed `specfact_cli.utils.ide_setup._module_discovery_roots(...)`, and verifies: + - `discover_prompt_sources_catalog(...)` includes `nold-ai/specfact-backlog` + - the discovered prompt set includes the restored backlog prompt filenames + - installed core export writes IDE prompt files for the backlog source segment and matches `expected_ide_prompt_export_paths(...)` + ## packaging-02 path contract (task 4.2) - **Module install layout**: `.specfact/modules//resources/prompts`, `.specfact/modules//resources/templates/backlog/field_mappings` (module root = directory containing `module-package.yaml`). diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/design.md b/openspec/changes/packaging-01-bundle-resource-payloads/design.md index 8f3547d..a94ed28 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/design.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/design.md @@ -6,6 +6,7 @@ That split is no longer coherent after bundle extraction: - prompts belong to the workflow bundles, not to core lifecycle commands - backlog field mapping templates belong to the backlog bundle, not to core +- backlog slash prompts also belong to the backlog bundle, but the active modules-side packaging change only enforced prompt payloads for codebase/project/spec/govern while backlog was reduced to field-mapping templates - prompt companion assets such as `shared/cli-enforcement.md` travel with those prompts and must remain resolvable after export - installed bundle packages need a stable on-disk resource layout so core CLI can discover resources from installed bundles rather than from fallback core directories @@ -18,8 +19,10 @@ The audit result is explicit: the official bundle package directories in `specfa - package prompt templates inside the owning official bundles - package prompt companion assets required by those prompts, starting 
with `resources/prompts/shared/cli-enforcement.md` - package other module-owned resources that still live in core, beginning with the full backlog field mapping template seed set +- package the restored backlog slash-prompt inventory under the backlog bundle root so `specfact init ide` can rediscover it from installed modules - standardize a resource layout that can be discovered from installed bundle roots - prove that signing and verification remain resource-aware and fail when bundled resources change without a version/signature update +- prove that the packaged backlog prompt layout is sufficient for the current core prompt-catalog logic without additional core discovery changes **Non-Goals:** @@ -39,6 +42,15 @@ Each official bundle should carry a `resources/` subtree inside its package dire This keeps installation, packaging, signature, and ownership boundaries aligned. +For backlog specifically, the authoritative prompt payload must live at the module root under: + +- `packages/specfact-backlog/resources/prompts/specfact.backlog-add.md` +- `packages/specfact-backlog/resources/prompts/specfact.backlog-daily.md` +- `packages/specfact-backlog/resources/prompts/specfact.backlog-refine.md` +- `packages/specfact-backlog/resources/prompts/specfact.sync-backlog.md` + +This is the path the current core prompt discovery contract already scans in installed module roots. Mirroring under `src/specfact_backlog/resources/...` is optional unless a runtime helper later requires package-local resource lookup, but it is not sufficient by itself for `specfact init ide`. + ### 2. Treat resource changes as bundle payload changes The modules repo already computes integrity hashes from full module payloads. This change will preserve that behavior and add tests/documentation so resource additions and edits are explicitly covered by version-bump and signature workflows. @@ -47,19 +59,27 @@ The modules repo already computes integrity hashes from full module payloads. 
Th The bundle packages should expose resources through a stable package-root layout rather than through bespoke manifest-only indirection. Core CLI can then discover `resources/prompts`, prompt companion files under the same prompt root, or other agreed subpaths from installed bundle roots. +For verification, this change must be testable against the current `specfact_cli.utils.ide_setup.discover_prompt_sources_catalog(...)` behavior. If a bundle packages prompt files somewhere else, the change is incomplete even if the tarball technically contains markdown files. + +### 4. Recover backlog prompt sources from historical canonical content + +The current `specfact-cli` tree no longer contains the backlog prompt files in `resources/prompts/`, so the implementation cannot copy from the live core working tree. The prompt payload must instead be restored from the last valid prompt content preserved in `specfact-cli` history and archived change artifacts, then imported into the backlog bundle as the new canonical packaged source. + ## Risks / Trade-offs - `[Resource ownership audit misses a leftover core-owned file]` -> record the audited inventory and explicit keep-in-core list in a change-local audit artifact. - `[Prompts copy successfully but relative includes break]` -> treat prompt companion files as part of the prompt payload contract, not optional extras. - `[Bundle packages gain more non-code files]` -> accept slightly larger artifacts in exchange for correct ownership and install behavior. - `[Core and modules repos drift on expected resource paths]` -> keep the path contract explicit and cross-reference the specfact-cli packaging change. +- `[Backlog prompts are reconstructed from stale or partial source text]` -> verify restored filenames and semantics against archived `specfact-cli` change artifacts before publishing. ## Migration Plan 1. Audit which current core resources are bundle-owned. -2. 
Move prompt templates, prompt companion assets, and backlog field mapping templates into the owning bundle packages. -3. Add tests for package-resource presence and integrity/version-bump enforcement. -4. Update docs/manifests as needed and sync dependency notes back to the core packaging change. +2. Rebase the packaging worktree to `origin/dev` and restore backlog prompt templates from historical canonical sources into the backlog bundle root. +3. Move prompt templates, prompt companion assets, and backlog field mapping templates into the owning bundle packages. +4. Add tests for package-resource presence, published artifact contents, installed-module discovery, and integrity/version-bump enforcement. +5. Update docs/manifests as needed, re-sign the backlog bundle after version bump, and sync dependency notes back to the core packaging change. ## Open Questions diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/proposal.md b/openspec/changes/packaging-01-bundle-resource-payloads/proposal.md index 96ae498..8921db7 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/proposal.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/proposal.md @@ -6,10 +6,12 @@ Bundle-owned prompt templates and other workflow resources still live in the cor - Add bundle-packaged resource payloads for official bundles so prompts and other module-owned assets ship from the bundle that owns them. - Move workflow prompt templates out of `specfact-cli/resources/prompts` into the corresponding bundle packages in `specfact-cli-modules`. +- Explicitly restore the backlog slash-prompt inventory in `specfact-backlog`, including `specfact.backlog-add.md`, `specfact.backlog-daily.md`, `specfact.backlog-refine.md`, and `specfact.sync-backlog.md`, so installed-module prompt discovery can surface backlog workflows again. - Move any other module-owned assets that still live in core, starting with backlog field mapping templates, into the owning bundle package. 
- Preserve prompt companion assets such as `resources/prompts/shared/cli-enforcement.md` so exported prompts do not ship broken relative references. - Audit and migrate the complete backlog workspace-template seed set required by init/install flows, not just `ado_*.yaml`. - Define and test a consistent package layout for bundle resources so the core CLI can discover them from installed bundle locations. +- Add cross-repo verification requirements that prove `specfact init ide` can discover backlog prompt resources from an installed `nold-ai/specfact-backlog` module root after the bundle is published/installed. - Lock resource payloads into signing, verification, and publish/version-bump workflows so bundle updates are resource-aware. - Keep `specfact-cli` runtime discovery, source selection, and `specfact init ide` export orchestration out of scope here; that work is tracked in `specfact-cli` change `init-ide-prompt-source-selection` (`nold-ai/specfact-cli#382`). diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/specs/bundle-packaged-resources/spec.md b/openspec/changes/packaging-01-bundle-resource-payloads/specs/bundle-packaged-resources/spec.md index dadf4c9..318ef8b 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/specs/bundle-packaged-resources/spec.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/specs/bundle-packaged-resources/spec.md @@ -6,7 +6,12 @@ Each official bundle package SHALL include the prompt templates and other non-co #### Scenario: Official bundles ship the audited prompt inventory - **WHEN** the audited prompt inventory from `RESOURCE_OWNERSHIP_AUDIT.md` is inspected - **THEN** each prompt template's canonical packaged source exists under the owning official bundle package -- **AND** the ownership mapping covers at least the codebase, project, spec, and govern bundles for the currently exported core prompt set +- **AND** the ownership mapping covers the codebase, project, spec, govern, and 
backlog bundles for the currently supported prompt set + +#### Scenario: Backlog bundle ships the restored slash-prompt inventory +- **WHEN** the backlog bundle package is inspected from source or from an installed artifact +- **THEN** `resources/prompts/` contains `specfact.backlog-add.md`, `specfact.backlog-daily.md`, `specfact.backlog-refine.md`, and `specfact.sync-backlog.md` +- **AND** those prompt files are treated as canonical bundle-owned sources rather than historical leftovers #### Scenario: Prompt companion resources ship with prompt payloads - **WHEN** an exported prompt template references a companion file by relative path, such as `./shared/cli-enforcement.md` diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/specs/resource-aware-integrity/spec.md b/openspec/changes/packaging-01-bundle-resource-payloads/specs/resource-aware-integrity/spec.md index 0da017f..abe00ed 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/specs/resource-aware-integrity/spec.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/specs/resource-aware-integrity/spec.md @@ -18,6 +18,12 @@ Bundled resources SHALL live at stable paths inside the bundle package so that t - **WHEN** the core CLI inspects an installed official bundle package - **THEN** the bundle contains a stable prompt resource path that can be discovered without scanning the core CLI repository +#### Scenario: Installed backlog bundle contributes prompt source catalog entries + +- **WHEN** `nold-ai/specfact-backlog` is installed under an effective module root with the packaged backlog prompt files +- **THEN** core prompt-source discovery includes `nold-ai/specfact-backlog` as a prompt source +- **AND** `specfact init ide` can export the backlog prompt filenames from that installed module root + #### Scenario: Core resolves prompt companion resources with exported prompts - **WHEN** the core CLI exports a prompt that depends on a companion file such as 
`shared/cli-enforcement.md` - **THEN** the companion resource is discoverable from the same installed bundle root diff --git a/openspec/changes/packaging-01-bundle-resource-payloads/tasks.md b/openspec/changes/packaging-01-bundle-resource-payloads/tasks.md index ce24552..677a52c 100644 --- a/openspec/changes/packaging-01-bundle-resource-payloads/tasks.md +++ b/openspec/changes/packaging-01-bundle-resource-payloads/tasks.md @@ -14,6 +14,9 @@ - [x] 2.4 Add failing tests for integrity/version-bump enforcement when bundled resources change. - [x] 2.5 Add failing tests or fixtures that exercise the stable resource paths expected by `specfact init ide` and related copy flows. - [x] 2.6 Record failing evidence in `TDD_EVIDENCE.md`. +- [x] 2.7 Add failing tests that assert `specfact-backlog` packages the backlog slash-prompt inventory at `resources/prompts/` with the expected filenames. +- [x] 2.8 Add failing artifact-content checks that confirm published `specfact-backlog-*.tar.gz` archives contain `resources/prompts/...` entries, not only field-mapping templates. +- [x] 2.9 Add or update a cross-repo verification slice that proves core prompt discovery includes `nold-ai/specfact-backlog` when an installed module root contains backlog prompt resources. ## 3. Bundle Resource Migration @@ -21,6 +24,10 @@ - [x] 3.2 Move prompt companion assets needed by migrated prompts into the stable bundle prompt layout. - [x] 3.3 Move backlog field mapping templates and any other audited bundle-owned resources into the owning bundle packages. - [x] 3.4 Update manifests, package data, and publish-time expectations so the resources are included in released bundle artifacts. +- [x] 3.5 Rebase the implementation worktree `feature/packaging-01-bundle-resource-payloads` onto `origin/dev` before editing payload files or tests. 
+- [x] 3.6 Restore the canonical backlog prompt sources into `packages/specfact-backlog/resources/prompts/` from the last valid backlog prompt content in `specfact-cli` history/change artifacts. +- [x] 3.7 If any restored backlog prompt uses relative companion assets, package those companions under the same backlog prompt root and verify the references remain valid after export. +- [x] 3.8 Bump `packages/specfact-backlog/module-package.yaml` version and refresh integrity metadata because prompt resources are part of the signed module payload. ## 4. Validation @@ -29,3 +36,6 @@ - [x] 4.3 Update docs or package guidance for bundle-owned resources and publish/version-bump expectations. - [x] 4.4 Confirm docs changes `docs-08` through `docs-12` absorb the user-facing documentation fallout from migrated resources so no extra docs change is required. - [x] 4.5 Run `openspec validate packaging-01-bundle-resource-payloads --strict`. +- [x] 4.6 Rebuild or inspect the published backlog artifact and verify it contains `resources/prompts/specfact.backlog-add.md`, `specfact.backlog-daily.md`, `specfact.backlog-refine.md`, and `specfact.sync-backlog.md`. +- [x] 4.7 Verify in `specfact-cli` that installed-module prompt discovery surfaces `nold-ai/specfact-backlog` from an installed module root and that `specfact init ide` can export the restored backlog prompts. +- [x] 4.8 Record both modules-repo and cross-repo verification evidence in `TDD_EVIDENCE.md`. 
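Task 3.8's integrity contract — bundled resources are part of the signed module payload, so any prompt edit must change the checksum — can be sketched as follows. This is a hypothetical shape for illustration only: the actual specfact hashing algorithm, file ordering, exclusions, and canonicalization are assumptions, not taken from this change.

```python
import hashlib
from pathlib import Path


def module_payload_checksum(module_root: str) -> str:
    """Deterministic sha256 over a module package directory.

    Hypothetical sketch only: the real specfact integrity algorithm
    (ordering, exclusions, manifest handling) may differ.
    """
    digest = hashlib.sha256()
    root = Path(module_root)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        # Both the relative path and the file bytes feed the hash, so adding,
        # renaming, or editing a resource file changes the checksum.
        digest.update(str(path.relative_to(root)).encode("utf-8"))
        digest.update(path.read_bytes())
    return "sha256:" + digest.hexdigest()
```

Under any shape like this, editing a file such as `resources/prompts/specfact.backlog-add.md` yields a new checksum, which is why a prompt payload change forces a version bump and re-sign of `module-package.yaml`.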
diff --git a/packages/specfact-backlog/module-package.yaml b/packages/specfact-backlog/module-package.yaml index 3c7f86d..86cc202 100644 --- a/packages/specfact-backlog/module-package.yaml +++ b/packages/specfact-backlog/module-package.yaml @@ -1,5 +1,5 @@ name: nold-ai/specfact-backlog -version: 0.41.14 +version: 0.41.16 commands: - backlog tier: official @@ -17,5 +17,5 @@ schema_extensions: project_metadata: - backlog_core.backlog_config integrity: - checksum: sha256:edda19b7862d2fc6f4510d148b71e7f74e5d3ff18cc4852018b0912592729c58 - signature: eQ86/44q9ZKUkkNIv/HzyZ7Oc3QXPyPZLpFtoMMTF5QjWWbE1RFntJs/8BWKHniV23iJnRUBlBnm4YIP30UQAQ== + checksum: sha256:8edbb9cad806a60976bfb5348f9a24863d71305583feb140d3af37c45ba29e59 + signature: X7hk6nWlHPGKyjXwxY04BQzfMwq5Bh2h/qnTZZb8l6sqGdC9mPLvmoMMNkrZ422wP5UuPSgTJSAu04tAyWcoAw== diff --git a/packages/specfact-backlog/resources/prompts/shared/cli-enforcement.md b/packages/specfact-backlog/resources/prompts/shared/cli-enforcement.md new file mode 100644 index 0000000..05e2227 --- /dev/null +++ b/packages/specfact-backlog/resources/prompts/shared/cli-enforcement.md @@ -0,0 +1,119 @@ +# CLI Usage Enforcement Rules + +## Core Principle + +**ALWAYS use SpecFact CLI commands. 
Never create artifacts directly.** + +## CLI vs LLM Capabilities + +### CLI-Only Operations (CI/CD Mode - No LLM Required) + +The CLI can perform these operations **without LLM**: + +- ✅ Tool execution (ruff, pylint, basedpyright, mypy, semgrep, specmatic) +- ✅ Bundle management (create, load, save, validate structure) +- ✅ Metadata management (timestamps, hashes, telemetry) +- ✅ Planning operations (init, add-feature, add-story, update-idea, update-feature) +- ✅ AST/Semgrep-based analysis (code structure, patterns, relationships) +- ✅ Specmatic validation (OpenAPI/AsyncAPI contract validation) +- ✅ Format validation (YAML/JSON schema compliance) +- ✅ Source tracking and drift detection + +**CRITICAL LIMITATIONS**: + +- ❌ **CANNOT generate code** - No LLM available in CLI-only mode +- ❌ **CANNOT do reasoning** - No semantic understanding without LLM + +### LLM-Required Operations (AI IDE Mode - Via Slash Prompts) + +These operations **require LLM** and are only available via AI IDE slash prompts: + +- ✅ Code generation (requires LLM reasoning) +- ✅ Code enhancement (contracts, refactoring, improvements) +- ✅ Semantic understanding (business logic, context, priorities) +- ✅ Plan enrichment (missing features, confidence adjustments, business context) +- ✅ Code reasoning (why decisions were made, trade-offs, constraints) + +**Access**: Only available via AI IDE slash prompts (Cursor, CoPilot, etc.) +**Pattern**: Slash prompt → LLM generates → CLI validates → Apply if valid + +## LLM Grounding Rules + +- Treat CLI artifacts as the source of truth for keys, structure, and metadata. +- Scan the codebase only when asked to infer missing behavior/context or explain deviations; respect `--entry-point` scope when provided. +- Use codebase findings to propose updates via CLI (enrichment report, plan update commands), never to rewrite artifacts directly. + +## Rules + +1. **Execute CLI First**: Always run CLI commands before any analysis +2. 
**Use CLI for Writes**: All write operations must go through CLI +3. **Read for Display Only**: Use file reading tools for display/analysis only +4. **Never Modify .specfact/**: Do not create/modify files in `.specfact/` directly +5. **Never Bypass Validation**: CLI ensures schema compliance and metadata +6. **Code Generation Requires LLM**: Code generation is only possible via AI IDE slash prompts, not CLI-only + +## Standard Validation Loop Pattern (For LLM-Generated Code) + +When generating or enhancing code via LLM, **ALWAYS** follow this pattern: + +```text +1. CLI Prompt Generation (Required) + ↓ + CLI generates structured prompt → saved to .specfact/prompts/ + (e.g., `generate contracts-prompt`, future: `generate code-prompt`) + +2. LLM Execution (Required - AI IDE Only) + ↓ + LLM reads prompt → generates enhanced code → writes to TEMPORARY file + (NEVER writes directly to original artifacts) + Pattern: `enhanced_.py` or `generated_.py` + +3. CLI Validation Loop (Required, up to N retries) + ↓ + CLI validates temp file with all relevant tools: + - Syntax validation (py_compile) + - File size check (must be >= original) + - AST structure comparison (preserve functions/classes) + - Contract imports verification + - Code quality checks (ruff, pylint, basedpyright, mypy) + - Test execution (contract-test, pytest) + ↓ + If validation fails: + - CLI provides detailed error feedback + - LLM fixes issues in temp file + - Re-validate (max 3 attempts) + ↓ + If validation succeeds: + - CLI applies changes to original file + - CLI removes temporary file + - CLI updates metadata/telemetry +``` + +**This pattern must be used for**: + +- ✅ Contract enhancement (`generate contracts-prompt` / `contracts-apply`) - Already implemented +- ⏳ Code generation (future: `generate code-prompt` / `code-apply`) - Needs implementation +- ⏳ Plan enrichment (future: `plan enrich-prompt` / `enrich-apply`) - Needs implementation +- ⏳ Any LLM-enhanced artifact modification - Needs 
implementation + +## What Happens If You Don't Follow + +- ❌ Artifacts may not match CLI schema versions +- ❌ Missing metadata and telemetry +- ❌ Format inconsistencies +- ❌ Validation failures +- ❌ Works only in Copilot mode, fails in CI/CD +- ❌ Code generation attempts in CLI-only mode will fail (no LLM available) + +## Available CLI Commands + +- `specfact plan init ` - Initialize project bundle +- `specfact plan select ` - Set active plan (used as default for other commands) +- `specfact code import [] --repo ` - Import from codebase (uses active plan if bundle not specified) +- `specfact plan review []` - Review plan (uses active plan if bundle not specified) +- `specfact plan harden []` - Create SDD manifest (uses active plan if bundle not specified) +- `specfact enforce sdd []` - Validate SDD (uses active plan if bundle not specified) +- `specfact sync bridge --adapter --repo ` - Sync with external tools +- See [Command Reference](https://docs.specfact.io/reference/commands/) for full list + +**Note**: Most commands now support active plan fallback. If `--bundle` is not specified, commands automatically use the active plan set via `plan select`. This improves workflow efficiency in AI IDE environments. diff --git a/packages/specfact-backlog/resources/prompts/specfact.backlog-add.md b/packages/specfact-backlog/resources/prompts/specfact.backlog-add.md new file mode 100644 index 0000000..8442789 --- /dev/null +++ b/packages/specfact-backlog/resources/prompts/specfact.backlog-add.md @@ -0,0 +1,90 @@ +--- +description: "Create backlog items with guided interactive flow and hierarchy checks" +--- + +# SpecFact Backlog Add Command + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Purpose + +Create a new backlog item in GitHub or Azure DevOps using the `specfact backlog add` workflow. 
The command supports interactive prompts, parent hierarchy validation, DoR checks, and provider-specific fields. + +**When to use:** Adding new work items (epic/feature/story/task/bug) with consistent quality and parent-child structure. + +**Quick:** `/specfact.backlog-add --adapter github --project-id owner/repo --type story --title "..."` + +## Parameters + +### Required + +- `--adapter ADAPTER` - Backlog adapter (`github`, `ado`) +- `--project-id PROJECT` - Project context + - GitHub: `owner/repo` + - ADO: `org/project` + +### Common options + +- `--type TYPE` - Backlog item type (provider/template specific) +- `--title TITLE` - Item title +- `--body BODY` - Item body/description +- `--parent PARENT_ID` - Optional parent issue/work item id +- `--non-interactive` - Disable prompt flow and require explicit inputs +- `--check-dor` - Run Definition of Ready checks before create +- `--template TEMPLATE` - Optional backlog template override +- `--custom-config PATH` - Optional mapping/config override file + +### Adapter-specific options + +- GitHub: + - `--repo-owner OWNER` + - `--repo-name NAME` + - `--github-token TOKEN` (or `GITHUB_TOKEN`) +- Azure DevOps: + - `--ado-org ORG` + - `--ado-project PROJECT` + - `--ado-token TOKEN` (or `AZURE_DEVOPS_TOKEN`) + - `--ado-base-url URL` (optional) + +## Workflow + +### Step 1: Execute command + +Run the CLI command with user arguments: + +```bash +specfact backlog add [OPTIONS] +``` + +### Step 2: Interactive completion (if inputs are missing) + +- Prompt for missing required fields. +- Prompt for optional quality fields (acceptance criteria, points, priority) when supported. +- Validate parent selection and allowed hierarchy before create. + +### Step 3: Confirm and create + +- Show planned create payload summary. +- Execute provider create operation. +- Return created item id/key/url. + +## CLI Enforcement + +- Always execute `specfact backlog add` for creation. 
+- Do not create provider issues/work items directly outside CLI unless user explicitly requests a manual path. + +## Input Contract + +- This command does not use `--export-to-tmp`/`--import-from-tmp` artifacts. +- Provide values through CLI options or interactive prompts; do not fabricate external tmp-file schemas. +- Do not ask Copilot to output `## Item N:` sections, `**ID**` labels, or markdown tmp files for this command. + +## Context + +{ARGS} diff --git a/packages/specfact-backlog/resources/prompts/specfact.backlog-daily.md b/packages/specfact-backlog/resources/prompts/specfact.backlog-daily.md new file mode 100644 index 0000000..e467543 --- /dev/null +++ b/packages/specfact-backlog/resources/prompts/specfact.backlog-daily.md @@ -0,0 +1,125 @@ +--- +description: "Daily standup and sprint review with story-by-story walkthrough" +--- + +# SpecFact Daily Standup Command + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Purpose + +Run a daily standup view and optional interactive walkthrough of backlog items (GitHub Issues, Azure DevOps work items) with the DevOps team: list items in scope, review story-by-story, highlight current focus, surface issues or open questions, and allow adding discussion notes as annotation comments on the issue. + +**When to use:** Daily standup, sprint review, quick status sync with the team. + +**Quick:** `/specfact.daily` or `/specfact.backlog-daily` with optional adapter and filters. From a clone, org/repo or org/project are auto-detected from git remote. + +## Parameters + +### Required + +- `ADAPTER` - Backlog adapter name (github, ado, etc.) + +### Adapter Configuration (same as backlog-refine) + +**GitHub:** `--repo-owner`, `--repo-name`, `--github-token` (optional). +**Azure DevOps:** `--ado-org`, `--ado-project`, `--ado-team` (optional), `--ado-base-url`, `--ado-token` (optional). 
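The clone auto-detection mentioned above can be sketched as below. This is illustrative only: the CLI's actual parsing is not shown in this change, the owner/repo values are placeholders, and ADO remote URL forms would need their own handling.

```shell
# Illustrative sketch: derive owner/repo from a GitHub origin remote URL,
# as clone auto-detection does. Not the CLI's actual implementation.
infer_project_id() {
  url="$1"                        # e.g. "$(git remote get-url origin)"
  slug="${url##*github.com[:/]}"  # drop scheme/host for ssh or https forms
  printf '%s\n' "${slug%.git}"    # drop trailing .git -> owner/repo
}

infer_project_id "git@github.com:example-org/example-repo.git"      # -> example-org/example-repo
infer_project_id "https://github.com/example-org/example-repo.git"  # -> example-org/example-repo
```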
+ +When run from a **clone**, org/repo or org/project are inferred from `git remote get-url origin`; no need to pass them unless overriding. + +### Filters + +- `--state STATE` - Filter by state (e.g. open, Active). Use `--state any` to disable state filtering. +- `--assignee USERNAME` or `--assignee me` - Filter by assignee. Use `--assignee any` to disable assignee filtering. +- `--search QUERY` - Provider-specific search query +- `--release RELEASE` - Filter by release identifier +- `--id ISSUE_ID` - Filter to one exact backlog item ID +- `--sprint SPRINT` / `--iteration PATH` - Filter by sprint/iteration (e.g. `current`) +- `--limit N` - Max items (default 20) +- `--first-issues N` / `--last-issues N` - Optional issue window (oldest/newest by numeric ID, mutually exclusive) +- `--blockers-first` - Sort items with blockers first +- `--show-unassigned` / `--unassigned-only` - Include or show only unassigned items + +### Daily-Specific Options + +- `--interactive` - Step-by-step review: select items with arrow keys, view full detail (refine-like) and **existing comments** on each issue +- `--copilot-export PATH` - Write summarized progress per story to a file for Copilot slash-command use +- `--summarize` - Output a prompt (instruction + filter context + standup data) to **stdout** for Copilot or slash command to generate a standup summary +- `--summarize-to PATH` - Write the same summarize prompt to a **file** +- `--comments` / `--annotations` - Include descriptions and comments in `--copilot-export` and summarize output +- `--first-comments N` / `--last-comments N` - Optional comment window for export/summarize outputs (`--comments`); default includes all comments +- `--suggest-next` - In interactive mode, show suggested next item by value score +- `--post` with `--yesterday`, `--today`, `--blockers` - Post a standup comment to the first item's issue (when adapter supports comments) +- Interactive navigation action `Post standup update` - Post 
yesterday/today/blockers to the currently selected story during `--interactive` walkthrough + +## Workflow + +### Step 1: Run Daily Standup + +Execute the CLI with adapter and optional filters: + +```bash +specfact backlog daily $ADAPTER [--state open] [--sprint current] [--assignee me] [--limit 20] +``` + +Or use the slash command with arguments: `/specfact.backlog-daily --adapter ado --sprint current` + +**What you see:** A standup table (assigned items) and a "Pending / open for commitment" table (unassigned items in scope). Each row shows ID, title, status, last updated, and optional yesterday/today/blockers from the item body. + +### Step 2: Interactive Story-by-Story Review (with DevOps team) + +When the user runs **`--interactive`** (or the slash command drives an interactive flow): + +1. **For each story** (one at a time): + - **Present** the item: ID, title, status, assignees, last updated, description, acceptance criteria, standup fields (yesterday/today/blockers), and the **latest existing comment** (when the adapter supports fetching comments). + - **Interactive comment scope**: If older comments exist, explicitly mention the count of hidden comments and guide users to export options for full context. + - **Highlight current focus**: What is the team member working on? What is the next intended step? + - **Surface issues or open questions**: Blockers, ambiguities, dependencies, or decisions needed. + - **Allow discussion notes**: If the team agrees, suggest or add a **comment** on the issue (e.g. "Standup YYYY-MM-DD: …" or "Discussion: …") so the discussion is captured as an annotation. Only add comments when the user explicitly approves (e.g. "add that as a comment"). + - If in CLI interactive navigation, use **Post standup update** to write the note to the selected story directly. + - **Move to next** only when the team is done with this story (e.g. "next", "done"). + +2. 
**Rules**: + - Do not update the backlog item body or title unless the user asks for a refinement (use `specfact backlog refine` for that). + - Comments are for **discussion notes** and standup updates; keep them short and actionable. + - If the adapter does not support comments, report that clearly and skip adding comments. + +3. **Navigation**: After each story, offer "Next story", "Previous story", "Back to list", "Exit" (or equivalent) so the team can move through the list without re-running the command. + +### Step 3: Generate Standup Summary (optional) + +When the user has run `specfact backlog daily ... --summarize` or `--summarize-to PATH`, the output is a **prompt** containing: + +- A short instruction: generate a concise daily standup summary from the following data. +- Filter context (adapter, state, sprint, assignee, limit). +- Per-item data (same as `--copilot-export`: ID, title, status, assignees, last updated, progress, blockers). + +**Use this output** by pasting it into Copilot or invoking the slash command `specfact.daily` with this context, so the AI can produce a short narrative summary (e.g. "Today's standup: 3 in progress, 1 blocked, 2 pending commitment …"). + +## Comments on Issues + +- **Interactive detail view** shows only the **latest comment** plus a hint when additional comments exist, to keep standup readable. +- **Full comment context**: use `--copilot-export --comments` or `--summarize --comments` (optional `--first-comments N` / `--last-comments N`) to include full or scoped comment history. +- **Adding comments**: When the team agrees to record a discussion note or standup update, add it as a comment on the issue (via `--post` for first-item standup lines or interactive **Post standup update** for selected stories). Do not invent comments; only suggest or add when the user approves. + +## CLI Enforcement + +- Execute `specfact backlog daily` (or equivalent) first; use its output as context. 
+- Use `--interactive` for story-by-story walkthrough; use `--summarize` or `--summarize-to` when a standup summary prompt is needed. +- Use `--copilot-export` when you need a file of item summaries for reference during standup. + +## Output Contract + +- This command does not support `--import-from-tmp`; do not invent a tmp import schema. +- Do not instruct Copilot to produce `## Item N:` blocks or `**ID**`/`**Body**` tmp artifacts for this command. +- If you write `--copilot-export` or `--summarize-to` artifacts, keep item sections and IDs unchanged from CLI output. + +## Context + +{ARGS} diff --git a/packages/specfact-backlog/resources/prompts/specfact.backlog-refine.md b/packages/specfact-backlog/resources/prompts/specfact.backlog-refine.md new file mode 100644 index 0000000..2d6f83f --- /dev/null +++ b/packages/specfact-backlog/resources/prompts/specfact.backlog-refine.md @@ -0,0 +1,557 @@ +--- +description: "Refine backlog items using template-driven AI assistance" +--- + +# SpecFact Backlog Refinement Command + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Purpose + +Refine backlog items from DevOps tools (GitHub Issues, Azure DevOps, etc.) into structured, template-compliant work items using AI-assisted refinement with template detection and validation. + +**When to use:** Standardizing backlog items, enforcing corporate templates (user stories, defects, spikes, enablers), preparing items for sprint planning. + +**Quick:** `/specfact.backlog-refine --adapter github --labels feature,enhancement` or `/specfact.backlog-refine --adapter ado --sprint "Sprint 1"` + +## Parameters + +### Required + +- `ADAPTER` - Backlog adapter name (github, ado, etc.) 
+ +### Adapter Configuration (Required for GitHub/ADO) + +**GitHub Adapter:** + +- `--repo-owner OWNER` - GitHub repository owner (required for GitHub adapter) +- `--repo-name NAME` - GitHub repository name (required for GitHub adapter) +- `--github-token TOKEN` - GitHub API token (optional, uses GITHUB_TOKEN env var or gh CLI if not provided) + +**Azure DevOps Adapter:** + +- `--ado-org ORG` - Azure DevOps organization or collection name (required for ADO adapter, except when collection is in base_url) +- `--ado-project PROJECT` - Azure DevOps project (required for ADO adapter) +- `--ado-team TEAM` - Azure DevOps team name (optional, defaults to project name for iteration lookup) +- `--ado-base-url URL` - Azure DevOps base URL (optional, defaults to `https://dev.azure.com` for cloud) + - **Cloud**: `https://dev.azure.com` (default) + - **On-premise**: `https://server` or `https://server/tfs/collection` (if collection included) +- `--ado-token TOKEN` - Azure DevOps PAT (optional, uses AZURE_DEVOPS_TOKEN env var or stored token if not provided) + +**ADO Configuration Notes:** + +- **Cloud (Azure DevOps Services)**: Always requires `--ado-org` and `--ado-project`. Base URL defaults to `https://dev.azure.com`. +- **On-premise (Azure DevOps Server)**: + - If base URL includes collection (e.g., `https://server/tfs/DefaultCollection`), `--ado-org` is optional. + - If base URL doesn't include collection, provide collection name via `--ado-org`. +- **API Endpoints**: + - WIQL queries use POST to `{base_url}/{org}/{project}/_apis/wit/wiql?api-version=7.1` (project-level) + - Work items batch GET uses `{base_url}/{org}/_apis/wit/workitems?ids={ids}&api-version=7.1` (organization-level) + - The `api-version` parameter is **required** for all ADO API calls + +### Filters + +- `--labels LABELS` or `--tags TAGS` - Filter by labels/tags (comma-separated, e.g., "feature,enhancement") +- `--state STATE` - Filter by state (case-insensitive, e.g., "open", "closed", "Active", "New"). 
Use `--state any` to disable state filtering. +- `--assignee USERNAME` - Filter by assignee (case-insensitive): + - **GitHub**: Login or @username (e.g., "johndoe" or "@johndoe") + - **ADO**: displayName, uniqueName, or mail (e.g., "Jane Doe" or `"jane.doe@example.com"`) + - Use `--assignee any` to disable assignee filtering. +- `--iteration PATH` - Filter by iteration path (ADO format: "Project\\Sprint 1", case-insensitive) +- `--sprint SPRINT` - Filter by sprint (case-insensitive): + - **ADO**: Use full iteration path (e.g., "Project\\Sprint 1") to avoid ambiguity when multiple sprints share the same name + - If omitted, defaults to current active iteration for the team + - Ambiguous name-only matches will prompt for explicit iteration path +- `--release RELEASE` - Filter by release identifier (case-insensitive) +- `--limit N` - Maximum number of items to process in this refinement session (caps batch size) +- `--first-issues N` / `--last-issues N` - Process only the first or last N items after filters/refinement checks (mutually exclusive; sorted by numeric issue/work-item ID, lower=older, higher=newer) +- `--ignore-refined` / `--no-ignore-refined` - When set (default), exclude already-refined items so `--limit` applies to items that need refinement. Use `--no-ignore-refined` to process the first N items in order. +- `--id ISSUE_ID` - Refine only this backlog item (issue or work item ID). Other items are ignored. 
+- `--persona PERSONA` - Filter templates by persona (product-owner, architect, developer) +- `--framework FRAMEWORK` - Filter templates by framework (agile, scrum, safe, kanban) + +### Template Selection + +- `--template TEMPLATE_ID` or `-t TEMPLATE_ID` - Target template ID (default: auto-detect) +- `--auto-accept-high-confidence` - Auto-accept refinements with confidence >= 0.85 + +### Preview and Writeback + +- `--preview` / `--no-preview` - Preview mode: show what will be written without updating backlog (default: --preview) + - **Preview mode shows**: Full item details (title, body, metrics, acceptance_criteria, work_item_type, etc.) + - **Preview mode skips**: Interactive refinement prompts (use `--write` to enable interactive refinement) +- `--write` - Write mode: explicitly opt-in to update remote backlog (requires --write flag) + +### Export/Import for Copilot Processing + +- `--export-to-tmp` - Export backlog items to temporary file for copilot processing (default: `/tmp/specfact-backlog-refine-.md`) +- `--import-from-tmp` - Import refined content from temporary file after copilot processing (default: `/tmp/specfact-backlog-refine--refined.md`) +- `--tmp-file PATH` - Custom temporary file path (overrides default) +- `--first-comments N` / `--last-comments N` - Optional comment window for preview and write-mode prompt context (default preview shows last 2; write prompts include full comments by default) + +**Export/Import Workflow**: + +1. Export items: `specfact backlog refine --adapter github --export-to-tmp --repo-owner OWNER --repo-name NAME` +2. Process with copilot: Open exported file and follow the embedded `## Copilot Instructions` and per-item template guidance (`Target Template`, `Required Sections`, `Optional Sections`). Save as `-refined.md` +3. 
Import refined: `specfact backlog refine --adapter github --import-from-tmp --repo-owner OWNER --repo-name NAME --write` + +When refining from an exported file, treat the embedded instructions in that file as the source of truth for required structure and formatting. + +**Critical content-preservation rule**: +- Never summarize, shorten, or silently remove details from story content. +- Preserve all existing requirements, constraints, business value, and feature intent. +- Refinement must increase clarity/structure, not reduce scope/detail. + +**Exact tmp structure contract (`--import-from-tmp`)**: + +- Keep one section per item with this exact heading pattern: `## Item N: `. +- The first non-empty line of each item block MUST be the `## Item N: ...` heading. +- Keep and preserve these metadata labels exactly (order may vary; labels must match): + - `**ID**: <original exported id>` (**mandatory and unchanged**) + - `**URL**: <url>` + - `**State**: <state>` + - `**Provider**: <provider>` +- Keep body in this exact form (fence language must be `markdown`): + - `**Body**:` + - ```` ```markdown ... ``` ```` +- Optional parsed fields (if present) must use exact labels: + - `**Acceptance Criteria**:` + - `**Metrics**:` with lines containing `Story Points:`, `Business Value:`, `Priority:` +- Do not prepend explanatory text, summaries, or headers before the first `## Item N:` block. +- Do not rename labels (`**ID**`, `**Body**`, `**Acceptance Criteria**`, `**Metrics**`). + +Exact item example: + +````markdown +## Item 1: Improve backlog refine import mapping + +**ID**: 123 +**URL**: https://dev.azure.com/org/project/_workitems/edit/123 +**State**: Active +**Provider**: ado + +**Metrics**: +- Story Points: 5 +- Business Value: 8 +- Priority: 2 + +**Acceptance Criteria**: +- [ ] Mapping uses configured story points field + +**Body**: +```markdown +## Description + +Refined body content. 
+``` +```` + +If `**ID**` is missing or changed, import cannot map refined content to backlog items and writeback will fail. + +**Provider-specific body contract (critical)**: + +- **GitHub**: + - Keep template narrative sections in `**Body**` markdown (for example `## As a`, `## I want`, `## So that`, `## Acceptance Criteria` when required by template). + - Metrics may be in `**Metrics**` and/or body sections if your template expects body headings. +- **ADO**: + - Keep narrative/template sections in `**Body**` markdown. + - Keep structured metadata in `**Metrics**` (`Story Points`, `Business Value`, `Priority`). + - Do **not** add metadata-only headings (`## Story Points`, `## Business Value`, `## Priority`, `## Work Item Type`, `## Area Path`, `## Iteration Path`) inside body text. + - Do **not** duplicate `## Description` heading text into the narrative content. + +**Template-driven refinement method (mandatory)**: + +- Use exported `**Target Template**`, `**Required Sections**`, and `**Optional Sections**` as the authoritative contract for each item. +- Preserve all functional and non-functional requirements; never silently drop details. +- Improve clarity, specificity, and testability (SMART-style) without scope reduction. +- If one story is too large, propose split candidates in `## Notes`; do not remove detail from the original item silently. + +**What to include / exclude boundaries**: + +- Include: + - All original business intent, user value, constraints, assumptions, dependencies, and acceptance signals. + - Explicit acceptance criteria and measurable outcomes. +- Exclude: + - Generic summaries that replace detailed requirements. + - Placeholder text (`unspecified`, `TBD`, `no info`) when original detail exists. + - Extra wrapper prose outside `## Item N:` blocks. 
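The structure contract above lends itself to a mechanical pre-import check. A minimal sketch (illustrative; the function name is invented and this is not the CLI's actual validator):

```python
import re

def check_item_block(block: str) -> list[str]:
    """Return structural problems found in one exported item block."""
    fence = "`" * 3 + "markdown"  # built dynamically to avoid a literal fence here
    problems = []
    lines = [line for line in block.splitlines() if line.strip()]
    # First non-empty line must be the item heading.
    if not lines or not re.match(r"^## Item \d+: .+", lines[0]):
        problems.append("first non-empty line must be a '## Item N: <title>' heading")
    # The ID label is mandatory and must carry a value.
    if not re.search(r"^\*\*ID\*\*: \S+", block, re.MULTILINE):
        problems.append("mandatory '**ID**:' label is missing or changed")
    # The body must be a markdown-fenced block directly under the Body label.
    if "**Body**:\n" + fence + "\n" not in block:
        problems.append("'**Body**:' must be followed by a markdown-fenced body")
    return problems
```

Running a check like this on each item block before `--import-from-tmp` surfaces the heading, `**ID**`, and body-fence violations that would otherwise make writeback fail.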
+ +One-shot GitHub scaffold example: + +````markdown +## Item 1: Improve authentication flow + +**ID**: 42 +**URL**: https://github.com/org/repo/issues/42 +**State**: open +**Provider**: github + +**Metrics**: +- Story Points: 8 +- Business Value: 13 +- Priority: 2 + +**Acceptance Criteria**: +- [ ] Token refresh handles expiry and retry behavior + +**Body**: +```markdown +## As a +platform user + +## I want +reliable authentication and token refresh behavior + +## So that +I can access protected resources without disruption + +## Acceptance Criteria +- [ ] Valid refresh token rotates and issues new access token +- [ ] Expired/invalid token returns clear error and audit event +``` +```` + +One-shot ADO scaffold example: + +````markdown +## Item 2: Harden login reliability + +**ID**: 108 +**URL**: https://dev.azure.com/org/project/_workitems/edit/108 +**State**: Active +**Provider**: ado + +**Metrics**: +- Story Points: 5 +- Business Value: 8 +- Priority: 2 + +**Acceptance Criteria**: +- [ ] All required acceptance checks are explicit and testable + +**Body**: +```markdown +## As a +registered user + +## I want +the login flow to handle token expiry and retries safely + +## So that +I can complete authentication without ambiguity or data loss + +## Acceptance Criteria +- [ ] Expired access token triggers refresh workflow +- [ ] Failed refresh prompts re-authentication with clear guidance +``` +```` + +**Comment context in export**: + +- Export includes item comments when adapter supports comment retrieval (GitHub + ADO). +- Export always includes full comment history (no truncation). +- Use `--first-comments N` or `--last-comments N` only to adjust preview output density. +- For refined import readiness, the `-refined.md` artifact should omit the instruction header and keep only item sections. 
+ +### Definition of Ready (DoR) + +- `--check-dor` - Check Definition of Ready (DoR) rules before refinement (loads from `.specfact/dor.yaml`) + +### OpenSpec Integration + +- `--bundle BUNDLE` or `-b BUNDLE` - OpenSpec bundle path to import refined items +- `--auto-bundle` - Auto-import refined items to OpenSpec bundle +- `--openspec-comment` - Add OpenSpec change proposal reference as comment (preserves original body) + +### Generic Search + +- `--search QUERY` or `-s QUERY` - Search query using provider-specific syntax (e.g., GitHub: "is:open label:feature") + +## Workflow + +### Step 1: Execute CLI Command + +Execute the SpecFact CLI command with user-provided arguments: + +```bash +specfact backlog refine $ADAPTER \ + [--labels LABELS] [--state STATE] [--assignee USERNAME] \ + [--iteration PATH] [--sprint SPRINT] [--release RELEASE] \ + [--limit N] \ + [--persona PERSONA] [--framework FRAMEWORK] \ + [--template TEMPLATE_ID] [--auto-accept-high-confidence] \ + [--preview] [--write] \ + [--bundle BUNDLE] [--auto-bundle] \ + [--search QUERY] +``` + +**Capture CLI output**: + +- List of backlog items found +- Template detection results for each item +- Refinement prompts for IDE AI copilot +- Validation results +- Preview of what will be written (if --preview) +- Writeback confirmation (if --write) + +### Step 2: Process Refinement Prompts (If Items Need Refinement) + +**When CLI generates refinement prompts**: + +1. **For each item needing refinement**: + - CLI displays a refinement prompt + - Copy the prompt and execute it in your IDE AI copilot + - Get refined content from AI copilot response + - Paste refined content back to CLI when prompted + +2. **CLI validation**: + - CLI validates refined content against template requirements + - CLI provides confidence score + - CLI shows preview of changes (original vs refined) + +3. 
**User confirmation**: + - Review preview (fields that will be updated vs preserved) + - Accept or reject refinement + - If accepted and --write flag set, CLI updates remote backlog + +4. **Session control**: + - Use `:skip` to skip the current item without updating + - Use `:quit` or `:abort` to cancel the entire session gracefully + - Session cancellation shows summary and exits without error + +### Interactive refinement (Copilot mode) + +When refining backlog items in Copilot mode (e.g. after export to tmp or during a refinement session), follow this **per-story loop** so the PO and stakeholders can review and approve before any update: + +1. **For each story** (one at a time): + - **Present** the refined story in a clear, readable format: + - Use headings for Title, Body, Acceptance Criteria, Metrics. + - Use tables or panels for structured data so it is easy to scan. + - **Assess specification level** so the DevOps team knows if the story is ready, under-specified, or over-specified: + - **Under-specified**: Missing acceptance criteria, vague scope, unclear "so that" or user value. List evidence (e.g. "No AC", "Scope could mean X or Y"). Suggest what to add. + - **Over-specified**: Too much implementation detail, too many sub-steps for one story, or solution prescribed instead of outcome. List evidence and suggest what to trim or split. + - **Fit for scope and intent**: Clear persona, capability, benefit, and testable AC; appropriate size. State briefly why it is ready (and, if DoR is in use, that DoR is satisfied). + - **List ambiguities** or open questions (e.g. unclear scope, missing acceptance criteria, conflicting assumptions). + - **Ask** the PO and other stakeholders for clarification: "Please review the refined story above. Do you want any changes? Any ambiguities to resolve? Should this story be split?" + - **If the user provides feedback**: Re-refine the story incorporating the feedback, then repeat from "Present" for this story. 
+ - **Only when the user explicitly approves** (e.g. "looks good", "approved", "no changes"): Mark this story as done and move to the **next** story. + - **Do not update** the backlog item (or write to the refined file as final) until the user has approved this story. + +2. **Formatting**: + - Use clear headings, bullet lists, and optional tables/panels so refinement sessions are easy to follow and enjoyable. + - Keep each story’s block self-contained so stakeholders can focus on one item at a time. + +3. **Rule**: The backlog item (or exported block) must only be updated/finalized **after** the user has approved the refined content for that story. Then proceed to the next story with the same process. + +### Step 3: Present Results + +Display refinement results: + +- Number of items refined +- Number of items skipped +- Template matches found +- Confidence scores +- Preview status (if --preview) +- Writeback status (if --write) + +## CLI Enforcement + +**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
+ +**Rules**: + +- Execute CLI first - never modify backlog items directly +- Use refinement prompts generated by CLI +- Validate refined content through CLI +- Use --preview flag by default for safety +- Use --write flag only when ready to update backlog + +## Field Preservation Policy + +**Fields that will be UPDATED**: + +- `title`: Updated if changed during refinement +- `body_markdown`: Updated with refined content +- `acceptance_criteria`: Updated if extracted/refined (provider-specific mapping) +- `story_points`: Updated if extracted/refined (provider-specific mapping) +- `business_value`: Updated if extracted/refined (provider-specific mapping) +- `priority`: Updated if extracted/refined (provider-specific mapping) +- `value_points`: Updated if calculated (SAFe: business_value / story_points) +- `work_item_type`: Updated if extracted/refined (provider-specific mapping) + +**Fields that will be PRESERVED** (not modified): + +- `assignees`: Preserved +- `tags`: Preserved +- `state`: Preserved (original state maintained) +- `sprint`: Preserved (if present) +- `release`: Preserved (if present) +- `iteration`: Preserved (if present) +- `area`: Preserved (if present) +- `source_state`: Preserved for cross-adapter state mapping (stored in bundle entries) +- All other metadata: Preserved in provider_fields + +**Provider-Specific Field Mapping**: + +- **GitHub**: Fields are extracted from markdown body (headings, labels, etc.) and mapped to canonical fields +- **ADO**: Fields are extracted from separate ADO fields (System.Description, System.AcceptanceCriteria, Microsoft.VSTS.Common.StoryPoints, etc.) 
and mapped to canonical fields +- **Custom Mapping**: ADO supports custom field mapping via `.specfact/templates/backlog/field_mappings/ado_custom.yaml` or `SPECFACT_ADO_CUSTOM_MAPPING` environment variable + +**Cross-Adapter State Preservation**: + +- When items are imported into bundles, the original `source_state` (e.g., "open", "closed", "New", "Active") is stored in `source_metadata["source_state"]` +- During cross-adapter export (e.g., GitHub → ADO), the `source_state` is used to determine the correct target state +- Generic state mapping ensures state is correctly translated between any adapter pair using OpenSpec as intermediate format +- This ensures closed GitHub issues sync to ADO as "Closed", and open GitHub issues sync to ADO as "New" + +**OpenSpec Comment Integration**: + +- When `--openspec-comment` is used, a structured comment is added to the backlog item +- The comment includes: Change ID, template used, confidence score, refinement timestamp +- Original body is preserved; comment provides OpenSpec reference for cross-sync + +**Cross-Adapter State Mapping**: + +- When refining items that will be synced across adapters (e.g., GitHub ↔ ADO), state is preserved using generic mapping +- Generic state mapping uses OpenSpec as intermediate format: + - Source adapter state → OpenSpec status → Target adapter state + - Example: GitHub "open" → OpenSpec "proposed" → ADO "New" + - Example: GitHub "closed" → OpenSpec "applied" → ADO "Closed" +- State preservation: Original `source_state` is stored in bundle entries and used during cross-adapter export +- Bidirectional mapping: Works in both directions (GitHub → ADO and ADO → GitHub) +- State mapping is automatic during `sync bridge` operations when `source_state` and `source_type` are present + +## Architecture Note + +SpecFact CLI follows a CLI-first architecture: + +- SpecFact CLI generates prompts/instructions for IDE AI copilots +- IDE AI copilots execute those instructions using their native LLM +- IDE 
AI copilots feed results back to SpecFact CLI +- SpecFact CLI validates and processes the results +- SpecFact CLI does NOT directly invoke LLM APIs + +## Expected Output + +### Success (Preview Mode) + +```text +✓ Refinement completed (Preview Mode) + +Found 5 backlog items +Limited to 3 items (found 5 total) +Refined: 3 +Skipped: 0 + +Preview mode: Refinement will NOT be written to backlog +Use --write flag to explicitly opt-in to writeback +``` + +### Success (Cancelled Session) + +```text +Session cancelled by user + +Found 5 backlog items +Refined: 1 +Skipped: 1 +``` + +### Success (Write Mode) + +```text +✓ Refinement completed and written to backlog + +Found 5 backlog items +Refined: 3 +Skipped: 2 + +Items updated in remote backlog: + - #123: User Story Template Applied + - #124: Defect Template Applied + - #125: Spike Template Applied +``` + +## Common Patterns + +```bash +# Refine GitHub issues with feature label (requires repo-owner and repo-name) +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --labels feature + +# Refine ADO work items (Azure DevOps Services - cloud) with full iteration path +/specfact.backlog-refine --adapter ado --ado-org my-org --ado-project my-project --sprint "MyProject\\Sprint 1" + +# Refine ADO work items using current active iteration (sprint omitted) +/specfact.backlog-refine --adapter ado --ado-org my-org --ado-project my-project --ado-team "My Team" --state Active + +# Refine ADO work items (Azure DevOps Server - on-premise, collection in base_url) +/specfact.backlog-refine --adapter ado --ado-base-url "https://devops.company.com/tfs/DefaultCollection" --ado-project my-project --state Active + +# Refine ADO work items (Azure DevOps Server - on-premise, collection provided) +/specfact.backlog-refine --adapter ado --ado-base-url "https://devops.company.com" --ado-org "DefaultCollection" --ado-project my-project --state Active + +# Refine with batch limit (process max 10 items) 
+/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --limit 10 --labels feature + +# Refine with case-insensitive filters +/specfact.backlog-refine --adapter ado --ado-org my-org --ado-project my-project --state "new" --assignee "jane doe" + +# Refine with Scrum framework and Product Owner persona +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --framework scrum --persona product-owner + +# Preview refinement without writing +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --preview + +# Write refinement to backlog with OpenSpec comment (explicit opt-in) +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --write --openspec-comment + +# Check Definition of Ready before refinement +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --check-dor --labels feature + +# Refine and import to OpenSpec bundle +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --bundle my-project --auto-bundle --state open + +# Cross-adapter sync workflow: Refine GitHub → Sync to ADO (with state preservation) +/specfact.backlog-refine --adapter github --repo-owner nold-ai --repo-name specfact-cli --write --labels feature +# Then sync to ADO (state will be automatically mapped: open → New, closed → Closed) +# specfact sync bridge --adapter ado --ado-org my-org --ado-project my-project --mode bidirectional + +# Cross-adapter sync workflow: Refine ADO → Sync to GitHub (with state preservation) +/specfact.backlog-refine --adapter ado --ado-org my-org --ado-project my-project --write --state Active +# Then sync to GitHub (state will be automatically mapped: New → open, Closed → closed) +# specfact sync bridge --adapter github --repo-owner my-org --repo-name my-repo --mode bidirectional +``` + +## Troubleshooting + +### ADO API Errors + +**Error: "No HTTP resource was 
found that matches the request URI"** + +- **Cause**: Missing `api-version` parameter or incorrect URL format +- **Solution**: Ensure `api-version=7.1` is included in all ADO API URLs. Check base URL format for on-premise installations. + +**Error: "The requested resource does not support http method 'GET'"** + +- **Cause**: Attempting to use GET on WIQL endpoint (which requires POST) +- **Solution**: WIQL queries must use POST method with JSON body containing the query. This is handled automatically by SpecFact CLI. + +**Error: Organization removed from request string** + +- **Cause**: Incorrect base URL format (may already include organization/collection) +- **Solution**: For on-premise, check if base URL already includes collection. If yes, omit `--ado-org` or adjust base URL accordingly. + +**Error: "Azure DevOps API token required"** + +- **Cause**: Missing authentication token +- **Solution**: Provide token via `--ado-token`, `AZURE_DEVOPS_TOKEN` environment variable, or use `specfact auth azure-devops` for device code flow. + +## Context + +{ARGS} diff --git a/packages/specfact-backlog/resources/prompts/specfact.sync-backlog.md b/packages/specfact-backlog/resources/prompts/specfact.sync-backlog.md new file mode 100644 index 0000000..83dc155 --- /dev/null +++ b/packages/specfact-backlog/resources/prompts/specfact.sync-backlog.md @@ -0,0 +1,557 @@ +# SpecFact Sync Backlog Command + +## User Input + +```text +$ARGUMENTS +``` + +You **MUST** consider the user input before proceeding (if not empty). + +## Purpose + +Sync OpenSpec change proposals to DevOps backlog tools (GitHub Issues, ADO, Linear, Jira) with AI-assisted content sanitization. Supports export-only sync from OpenSpec change proposals to DevOps issues. + +**When to use:** Creating backlog issues from OpenSpec change proposals, syncing change status to DevOps tools, managing public vs internal issue content. 
+ +**Quick:** `/specfact.sync-backlog --adapter github` or `/specfact.sync-backlog --sanitize --target-repo owner/repo` + +## Parameters + +### Target/Input + +- `--repo PATH` - Path to OpenSpec repository containing change proposals. Default: current directory (.) +- `--code-repo PATH` - Path to source code repository for code change detection (default: same as `--repo`). **Required when OpenSpec repository differs from source code repository.** For example, if OpenSpec proposals are in `specfact-cli-internal` but source code is in `specfact-cli`, use `--repo /path/to/specfact-cli-internal --code-repo /path/to/specfact-cli`. +- `--target-repo OWNER/REPO` - Target repository for issue creation (format: owner/repo). Default: same as code repository + +### Behavior/Options + +- `--sanitize/--no-sanitize` - Sanitize proposal content for public issues (default: auto-detect based on repo setup) + - Auto-detection: If code repo != planning repo → sanitize, if same repo → no sanitization + - `--sanitize`: Force sanitization (removes competitive analysis, internal strategy, implementation details) + - `--no-sanitize`: Skip sanitization (use full proposal content) + - **Proposal Filtering**: The sanitization flag also controls which proposals are synced: + - **Public repos** (`--sanitize`): Only syncs proposals with status `"applied"` (archived/completed changes) + - **Internal repos** (`--no-sanitize`): Syncs all active proposals (`"proposed"`, `"in-progress"`, `"applied"`, `"deprecated"`, `"discarded"`) + - Filtering prevents premature exposure of work-in-progress proposals to public repositories +- `--interactive` - Interactive mode for AI-assisted sanitization (requires slash command) + - Enables interactive change selection + - Enables per-change sanitization selection + - Enables LLM review workflow for sanitized proposals +- `--change-ids IDS` - Comma-separated list of change proposal IDs to export (default: all active proposals) + - Example: `--change-ids 
add-devops-backlog-tracking,add-change-tracking-datamodel` + - Only used in non-interactive mode (interactive mode prompts for selection) +- `--export-to-tmp` - Export proposal content to temporary file for LLM review (sanitization workflow) + - Creates `/tmp/specfact-proposal-<change-id>.md` for each proposal + - Used internally by slash command for sanitization review +- `--import-from-tmp` - Import sanitized content from temporary file (sanitization workflow) + - Reads `/tmp/specfact-proposal-<change-id>-sanitized.md` for each proposal + - Used internally by slash command after LLM review +- `--tmp-file PATH` - Specify temporary file path (used with --export-to-tmp or --import-from-tmp) + - Default: `/tmp/specfact-proposal-<change-id>.md` or `/tmp/specfact-proposal-<change-id>-sanitized.md` + +**Exact tmp structure contract (`sync bridge --import-from-tmp`)**: + +- Preserve proposal heading and section headers exactly: + - `# Change: <title>` + - `## Why` + - `## What Changes` +- Keep a blank line after each section header (`## Why` and `## What Changes`) before content. +- Do not rename, remove, or reorder these headers. +- Keep sanitized content inside `## Why` and `## What Changes` sections only. +- Do not add extra top-level sections before, between, or after these sections. +- If headers are missing or renamed, parser extraction for rationale/description will be incomplete. + +Exact sanitized tmp example: + +```markdown +# Change: Improve backlog refinement mapping + +## Why + +Short rationale text. + +## What Changes + +Sanitized proposal description text. 
+``` + +### Code Change Tracking (Advanced) + +- `--track-code-changes/--no-track-code-changes` - Detect code changes (git commits, file modifications) and add progress comments to existing issues (default: False) + - **Repository Selection**: Uses `--code-repo` if provided, otherwise uses `--repo` for code change detection + - **Git Commit Detection**: Searches git log for commits mentioning the change proposal ID (e.g., `add-code-change-tracking`) + - **File Change Tracking**: Extracts files modified in detected commits + - **Progress Comment Generation**: Formats comment with commit details and file changes + - **Duplicate Prevention**: Checks against existing comments to avoid duplicates + - **Source Tracking Update**: Updates `proposal.md` with progress metadata +- `--add-progress-comment/--no-add-progress-comment` - Add manual progress comment to existing issues without code change detection (default: False) +- `--update-existing/--no-update-existing` - Update existing issue bodies when proposal content changes (default: False for safety). Uses content hash to detect changes. + +### Advanced/Configuration + +- `--adapter TYPE` - DevOps adapter type (github, ado, linear, jira). Default: github + +**GitHub Adapter Options:** + +- `--repo-owner OWNER` - Repository owner (for GitHub adapter). Optional, can use bridge config +- `--repo-name NAME` - Repository name (for GitHub adapter). Optional, can use bridge config +- `--github-token TOKEN` - GitHub API token (optional, uses GITHUB_TOKEN env var or gh CLI if not provided) +- `--use-gh-cli/--no-gh-cli` - Use GitHub CLI (`gh auth token`) to get token automatically (default: True). 
Useful in enterprise environments where PAT creation is restricted + +**Azure DevOps Adapter Options:** + +- `--ado-org ORG` - Azure DevOps organization (required for ADO adapter) +- `--ado-project PROJECT` - Azure DevOps project (required for ADO adapter) +- `--ado-base-url URL` - Azure DevOps base URL (optional, defaults to <https://dev.azure.com>). Use for Azure DevOps Server (on-prem) +- `--ado-token TOKEN` - Azure DevOps PAT (optional, uses AZURE_DEVOPS_TOKEN env var if not provided). Requires Work Items (Read & Write) permissions +- `--ado-work-item-type TYPE` - Azure DevOps work item type (optional, derived from process template if not provided). Examples: 'User Story', 'Product Backlog Item', 'Bug' + +## Workflow + +### Step 1: Parse Arguments + +- Extract repository path (default: current directory) +- Extract adapter type (default: github) +- Extract sanitization preference (default: auto-detect) +- Extract target repository (default: same as code repo) + +### Step 2: Interactive Change Selection (Slash Command Only) + +**When using slash command** (`/specfact.sync-backlog`), provide interactive selection: + +1. **List available change proposals**: + - Read OpenSpec change proposals from `openspec/changes/` (including archived proposals) + - Display list with: change ID, title, status, existing issue (if any) + - Format: `[1] add-devops-backlog-tracking (applied) - Issue #17` + - Format: `[2] add-change-tracking-datamodel (proposed) - No issue` + - **Note**: When `--sanitize` is used, only proposals with status `"applied"` will be synced to public repos + +2. **User selection**: + - Prompt: "Select changes to export (comma-separated numbers, 'all', or 'none'):" + - Parse selection (e.g., "1,3" or "all") + - Validate selection against available proposals + +3. **Per-change sanitization selection**: + - For each selected change, prompt: "Sanitize '[change-title]'? 
(y/n/auto):" + - `y`: Force sanitization + - `n`: Skip sanitization + - `auto`: Use auto-detection (code repo != planning repo) + - Store selection: `{change_id: sanitize_choice}` + +**When using CLI directly** (non-interactive): + +- **Public repos** (`--sanitize`): Only exports proposals with status `"applied"` (archived/completed) +- **Internal repos** (`--no-sanitize`): Exports all active proposals regardless of status +- Use `--sanitize/--no-sanitize` flag to control filtering behavior +- No per-change selection + +### Step 3: Execute CLI (Initial Pass) + +**For non-sanitized proposals** (direct export): + +```bash +# GitHub adapter +specfact sync bridge --adapter github --mode export-only --repo <openspec-path> \ + --no-sanitize --change-ids <id1,id2> \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + [--target-repo <owner/repo>] [--repo-owner <owner>] [--repo-name <name>] \ + [--github-token <token>] [--use-gh-cli] + +# Azure DevOps adapter +specfact sync bridge --adapter ado --mode export-only --repo <openspec-path> \ + --no-sanitize --change-ids <id1,id2> \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + --ado-org <org> --ado-project <project> \ + [--ado-token <token>] [--ado-base-url <url>] [--ado-work-item-type <type>] +``` + +**For sanitized proposals** (requires LLM review): + +```bash +# Step 3a: Export to temporary file for LLM review (GitHub) +specfact sync bridge --adapter github --mode export-only --repo <openspec-path> \ + --sanitize --change-ids <id1,id2> \ + [--code-repo <source-code-path>] \ + --export-to-tmp --tmp-file /tmp/specfact-proposal-<change-id>.md \ + [--target-repo <owner/repo>] [--repo-owner <owner>] [--repo-name <name>] \ + [--github-token <token>] [--use-gh-cli] + +# Step 3a: Export to temporary file for LLM review (ADO) +specfact sync bridge --adapter ado --mode export-only --repo <openspec-path> \ + --sanitize --change-ids <id1,id2> \ + 
[--code-repo <source-code-path>] \ + --export-to-tmp --tmp-file /tmp/specfact-proposal-<change-id>.md \ + --ado-org <org> --ado-project <project> \ + [--ado-token <token>] [--ado-base-url <url>] +``` + +**Note**: When `--code-repo` is provided, code change detection uses that repository. Otherwise, code changes are detected in the OpenSpec repository (`--repo`). + +### Step 4: LLM Sanitization Review (Slash Command Only, For Sanitized Proposals) + +**Only execute if sanitization is required**: + +1. **Read temporary file**: + - Read `/tmp/specfact-proposal-<change-id>.md` for each sanitized proposal + - Display original content to user + +2. **LLM sanitization**: + - Review proposal content for: + - Competitive analysis sections (remove) + - Market positioning statements (remove) + - Implementation details (file paths, code structure - remove or generalize) + - Effort estimates and timelines (remove) + - Internal strategy sections (remove) + - Preserve: + - User-facing value propositions + - High-level feature descriptions + - Acceptance criteria (user-facing) + - External documentation links + +3. **Generate sanitized content**: + - Create sanitized version with removed sections/patterns + - Write to `/tmp/specfact-proposal-<change-id>-sanitized.md` + - Display diff (original vs sanitized) for user review + +4. **User approval**: + - Prompt: "Approve sanitized content? 
(y/n/edit):" + - `y`: Proceed to Step 5 + - `n`: Skip this proposal + - `edit`: Allow user to manually edit sanitized file, then proceed + +### Step 5: Execute CLI (Final Export) + +**For sanitized proposals** (after LLM review): + +```bash +# Step 5a: Import sanitized content from temporary file (GitHub) +specfact sync bridge --adapter github --mode export-only --repo <path> \ + --import-from-tmp --tmp-file /tmp/specfact-proposal-<change-id>-sanitized.md \ + --change-ids <id1,id2> \ + [--target-repo <owner/repo>] [--repo-owner <owner>] [--repo-name <name>] \ + [--github-token <token>] [--use-gh-cli] + +# Step 5a: Import sanitized content from temporary file (ADO) +specfact sync bridge --adapter ado --mode export-only --repo <path> \ + --import-from-tmp --tmp-file /tmp/specfact-proposal-<change-id>-sanitized.md \ + --change-ids <id1,id2> \ + --ado-org <org> --ado-project <project> \ + [--ado-token <token>] [--ado-base-url <url>] +``` + +**For non-sanitized proposals** (already exported in Step 3): + +- No additional CLI call needed + +### Step 6: Present Results + +- Display sync results (issues created/updated) +- Show issue URLs and numbers +- Indicate sanitization status (if applied) +- List which proposals were sanitized vs exported directly +- **Show code change tracking results** (if `--track-code-changes` was enabled): + - Number of commits detected + - Number of progress comments added + - Repository used for code change detection (`--code-repo` or `--repo`) +- **Show filtering warnings** (if proposals were filtered out due to status) + - Example: `⚠ Filtered out 2 proposal(s) with non-applied status (public repos only sync archived/completed proposals)` +- Present any warnings or errors + +## CLI Enforcement + +**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
+ +**Rules:** + +- Execute CLI first - never create artifacts directly +- Use `--no-interactive` flag in CI/CD environments +- Never modify `.specfact/` or `openspec/` directly +- Use CLI output as grounding for validation +- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) + +## Dual-Stack Workflow (Copilot Mode) + +When in copilot mode, follow this workflow: + +### Phase 1: Interactive Selection (Slash Command Only) + +**Purpose**: Allow user to select which changes to export and sanitization preferences + +**What to do**: + +1. **List available proposals**: + - Read `openspec/changes/` directory (including `archive/` subdirectory) + - Parse `proposal.md` files to extract: change_id, title, status + - Check for existing issues via `source_tracking` section + - Display numbered list to user + - **Note**: When `--sanitize` is used, only proposals with status `"applied"` will be available for public repos + +2. **User selection**: + - Prompt for change selection (comma-separated numbers, 'all', 'none') + - For each selected change, prompt for sanitization preference (y/n/auto) + - Store selections: `{change_id: {selected: bool, sanitize: bool|None}}` + +**Output**: Dictionary mapping change IDs to selection and sanitization preferences + +### Phase 2: CLI Export to Temporary Files (For Sanitized Proposals Only) + +**Purpose**: Export proposal content to temporary files for LLM review + +**When**: Only for proposals where `sanitize=True` + +**What to do**: + +```bash +# For each sanitized proposal, export to temp file (GitHub) +specfact sync bridge --adapter github --mode export-only --repo <openspec-path> \ + --change-ids <change-id> --export-to-tmp --tmp-file /tmp/specfact-proposal-<change-id>.md \ + [--code-repo <source-code-path>] \ + [--repo-owner <owner>] [--repo-name <name>] [--github-token <token>] [--use-gh-cli] + +# For each sanitized proposal, export to temp file (ADO) +specfact sync bridge --adapter ado --mode export-only --repo 
<openspec-path> \ + --change-ids <change-id> --export-to-tmp --tmp-file /tmp/specfact-proposal-<change-id>.md \ + [--code-repo <source-code-path>] \ + --ado-org <org> --ado-project <project> [--ado-token <token>] [--ado-base-url <url>] +``` + +**Capture**: + +- Temporary file paths for each proposal +- Original proposal content (for comparison) + +**What NOT to do**: + +- ❌ Create GitHub issues directly (wait for sanitization review) +- ❌ Skip LLM review for sanitized proposals + +### Phase 3: LLM Sanitization Review (For Sanitized Proposals Only) + +**Purpose**: Review and sanitize proposal content before creating public issues + +**When**: Only for proposals where `sanitize=True` + +**What to do**: + +1. **Read temporary file**: + - Read `/tmp/specfact-proposal-<change-id>.md` for each sanitized proposal + - Display original content to user + +2. **LLM sanitization**: + - Review proposal content section by section + - Remove: + - Competitive analysis sections (`## Competitive Analysis`) + - Market positioning statements (`## Market Positioning`) + - Implementation details (file paths like `src/specfact_cli/...`, code structure) + - Effort estimates and timelines + - Internal strategy sections + - Preserve: + - User-facing value propositions + - High-level feature descriptions (without file paths) + - Acceptance criteria (user-facing) + - External documentation links + +3. **Generate sanitized content**: + - Create sanitized version with removed sections/patterns + - Write to `/tmp/specfact-proposal-<change-id>-sanitized.md` + - Display diff (original vs sanitized) for user review + +4. **User approval**: + - Prompt: "Approve sanitized content for '[change-title]'? 
(y/n/edit):" + - `y`: Proceed to Phase 4 + - `n`: Skip this proposal (don't create issue) + - `edit`: Allow user to manually edit sanitized file, then proceed + +**Output**: Sanitized content files in `/tmp/specfact-proposal-<change-id>-sanitized.md` + +**What NOT to do**: + +- ❌ Create GitHub issues directly (use CLI in Phase 4) +- ❌ Modify original proposal files +- ❌ Skip user approval step + +### Phase 4: CLI Direct Export (For Non-Sanitized Proposals) + +**Purpose**: Export proposals that don't require sanitization + +**When**: For proposals where `sanitize=False` + +**What to do**: + +```bash +# Export non-sanitized proposals directly (GitHub) +specfact sync bridge --adapter github --mode export-only --repo <openspec-path> \ + --change-ids <id1,id2> --no-sanitize \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + [--repo-owner <owner>] [--repo-name <name>] [--github-token <token>] [--use-gh-cli] + +# Export non-sanitized proposals directly (ADO) +specfact sync bridge --adapter ado --mode export-only --repo <openspec-path> \ + --change-ids <id1,id2> --no-sanitize \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + --ado-org <org> --ado-project <project> [--ado-token <token>] [--ado-base-url <url>] +``` + +**Result**: Issues created directly without LLM review + +### Phase 5: CLI Import Sanitized Content (For Sanitized Proposals Only) + +**Purpose**: Create GitHub issues from LLM-reviewed sanitized content + +**When**: Only for proposals where `sanitize=True` and user approved + +**What to do**: + +```bash +# For each approved sanitized proposal, import from temp file and create issue (GitHub) +specfact sync bridge --adapter github --mode export-only --repo <openspec-path> \ + --change-ids <change-id> --import-from-tmp --tmp-file /tmp/specfact-proposal-<change-id>-sanitized.md \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + 
[--repo-owner <owner>] [--repo-name <name>] [--github-token <token>] [--use-gh-cli] + +# For each approved sanitized proposal, import from temp file and create work item (ADO) +specfact sync bridge --adapter ado --mode export-only --repo <openspec-path> \ + --change-ids <change-id> --import-from-tmp --tmp-file /tmp/specfact-proposal-<change-id>-sanitized.md \ + [--code-repo <source-code-path>] \ + [--track-code-changes] [--add-progress-comment] \ + --ado-org <org> --ado-project <project> [--ado-token <token>] [--ado-base-url <url>] +``` + +**Result**: Issues created with sanitized content + +**What NOT to do**: + +- ❌ Create GitHub issues directly via API (use CLI command) +- ❌ Skip CLI validation +- ❌ Modify `.specfact/` or `openspec/` folders directly + +### Phase 6: Cleanup and Results + +**Purpose**: Clean up temporary files and present results + +**What to do**: + +1. **Cleanup**: + - Remove temporary files: `/tmp/specfact-proposal-*.md` + - Remove sanitized files: `/tmp/specfact-proposal-*-sanitized.md` + +2. 
**Present results**: + - Display sync results (issues created/updated) + - Show issue URLs and numbers + - Indicate which proposals were sanitized vs exported directly + - **Show code change tracking results** (if `--track-code-changes` was enabled): + - Number of commits detected per proposal + - Number of progress comments added per issue + - Repository used for code change detection (`--code-repo` or `--repo`) + - Example: `✓ Detected 3 commits for 'add-feature-x', added 1 progress comment to issue #123` + - **Show filtering warnings** (if proposals were filtered out): + - Public repos: `⚠ Filtered out N proposal(s) with non-applied status (public repos only sync archived/completed proposals)` + - Internal repos: `⚠ Filtered out N proposal(s) without source tracking entry and inactive status` + - Present any warnings or errors + +**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) + +## Expected Output + +### Success + +```text +✓ Successfully synced 3 change proposals + +Adapter: github +Repository: nold-ai/specfact-cli-internal +Code Repository: nold-ai/specfact-cli (separate repo) + +Issues Created: + - #14: Add DevOps Backlog Tracking Integration + - #15: Add Change Tracking Data Model + - #16: Implement OpenSpec Bridge Adapter + +Sanitization: Applied (different repos detected) +Issue IDs saved to OpenSpec proposal files +``` + +### Success (With Code Change Tracking) + +```text +✓ Successfully synced 3 change proposals + +Adapter: github +Repository: nold-ai/specfact-cli-internal +Code Repository: nold-ai/specfact-cli (separate repo) + +Issues Created: + - #14: Add DevOps Backlog Tracking Integration + - #15: Add Change Tracking Data Model + - #16: Implement OpenSpec Bridge Adapter + +Code Change Tracking: + - Detected 5 commits for 'add-devops-backlog-tracking' + - Added 1 progress comment to issue #14 + - Detected 3 
commits for 'add-change-tracking-datamodel' + - Added 1 progress comment to issue #15 + - No new commits detected for 'implement-openspec-bridge-adapter' + +Sanitization: Applied (different repos detected) +Issue IDs saved to OpenSpec proposal files +``` + +### Error (Missing Token) + +```text +✗ Sync failed: Missing GitHub API token +Provide token via --github-token, GITHUB_TOKEN env var, or --use-gh-cli +``` + +### Warning (Sanitization Applied) + +```text +⚠ Content sanitization applied (code repo != planning repo) +Competitive analysis and internal strategy sections removed +``` + +### Warning (Proposals Filtered - Public Repo) + +```text +✓ Successfully synced 1 change proposals +⚠ Filtered out 2 proposal(s) with non-applied status (public repos only sync archived/completed proposals, regardless of source tracking). Only 1 applied proposal(s) will be synced. +``` + +### Warning (Proposals Filtered - Internal Repo) + +```text +✓ Successfully synced 3 change proposals +⚠ Filtered out 1 proposal(s) without source tracking entry for target repo and inactive status. Only 3 proposal(s) will be synced. +``` + +## Common Patterns + +```bash +# Public repo: only syncs "applied" proposals (archived changes) +/specfact.sync-backlog --adapter github --sanitize --target-repo nold-ai/specfact-cli + +# Internal repo: syncs all active proposals (proposed, in-progress, applied, etc.) 
+/specfact.sync-backlog --adapter github --no-sanitize --target-repo nold-ai/specfact-cli-internal + +# Auto-detect sanitization (filters based on repo setup) +/specfact.sync-backlog --adapter github + +# Explicit repository configuration (GitHub) +/specfact.sync-backlog --adapter github --repo-owner nold-ai --repo-name specfact-cli-internal + +# Azure DevOps adapter (requires org and project) +/specfact.sync-backlog --adapter ado --ado-org my-org --ado-project my-project + +# Use GitHub CLI for token (enterprise-friendly) +/specfact.sync-backlog --adapter github --use-gh-cli +``` + +## Context + +{ARGS} diff --git a/tests/unit/docs/test_docs_review.py b/tests/unit/docs/test_docs_review.py index d8c9675..cf14807 100644 --- a/tests/unit/docs/test_docs_review.py +++ b/tests/unit/docs/test_docs_review.py @@ -89,6 +89,36 @@ def _normalize_route(route: str) -> str: return cleaned +def _list_front_matter_redirect_from_routes(text: str) -> list[str]: + """Return normalized redirect_from routes declared in YAML front matter only.""" + lines = text.splitlines() + if not lines or lines[0].strip() != "---": + return [] + + end_index = None + for index in range(1, len(lines)): + if lines[index].strip() == "---": + end_index = index + break + if end_index is None: + return [] + + routes: list[str] = [] + in_redirect_block = False + for line in lines[1:end_index]: + if line.strip() == "redirect_from:": + in_redirect_block = True + continue + if in_redirect_block: + stripped = line.strip() + if stripped.startswith("- "): + route = stripped[2:].split("#", 1)[0].strip().strip('"').strip("'") + routes.append(_normalize_route(route)) + elif stripped and not stripped.startswith("-") and not stripped.startswith("#"): + in_redirect_block = False + return routes + + def _published_route_for_path(path: Path, metadata: dict[str, str]) -> str: permalink = metadata.get("permalink") if permalink: @@ -371,31 +401,6 @@ def test_config_links_to_core_docs_site() -> None: # 
--------------------------------------------------------------------------- -def _list_redirect_from_routes(text: str) -> list[str]: - """Return normalized routes declared under ``redirect_from:`` in front matter.""" - routes: list[str] = [] - lines = text.splitlines() - i = 0 - while i < len(lines): - if lines[i].strip() == "redirect_from:": - i += 1 - while i < len(lines): - stripped = lines[i].strip() - if stripped.startswith("- "): - val = stripped[2:].strip().strip('"').strip("'") - routes.append(_normalize_route(val)) - i += 1 - elif not stripped or stripped.startswith("#"): - i += 1 - elif stripped == "---": - break - else: - break - break - i += 1 - return routes - - def _guides_legacy_redirect_violation(path: Path, text: str) -> str | None: """If ``docs/guides/<stem>.md`` publishes outside ``/guides/``, require ``redirect_from`` for ``/guides/<stem>/``. @@ -416,7 +421,7 @@ def _guides_legacy_redirect_violation(path: Path, text: str) -> str | None: return None expected = _normalize_route(f"/guides/{path.stem}/") - redirects = _list_redirect_from_routes(text) + redirects = _list_front_matter_redirect_from_routes(text) if expected in redirects: return None return ( @@ -447,21 +452,45 @@ def _extract_redirect_from_entries() -> dict[str, Path]: text = _read_text(path) if "redirect_from:" not in text: continue - in_redirect_block = False - for line in text.splitlines(): - if line.strip() == "redirect_from:": - in_redirect_block = True - continue - if in_redirect_block: - stripped = line.strip() - if stripped.startswith("- "): - route = stripped[2:].strip().strip('"').strip("'") - redirects[_normalize_route(route)] = path - elif stripped == "---" or (stripped and not stripped.startswith("-")): - in_redirect_block = False + for route in _list_front_matter_redirect_from_routes(text): + redirects[route] = path return redirects +def test_list_front_matter_redirect_from_routes_ignores_body_redirect_marker() -> None: + text = """--- +layout: default +title: Example 
+redirect_from: + - /legacy-path/ +--- + +Body content. + +redirect_from: + - /not-front-matter/ +""" + + assert _list_front_matter_redirect_from_routes(text) == ["/legacy-path/"] + + +def test_list_front_matter_redirect_from_routes_keeps_entries_after_comments() -> None: + text = """--- +layout: default +title: Example +redirect_from: + # legacy aliases + - /legacy-one/ # keep + + - \"/legacy-two/\" +permalink: /current/ +--- +Body +""" + + assert _list_front_matter_redirect_from_routes(text) == ["/legacy-one/", "/legacy-two/"] + + def test_moved_files_have_redirect_from_entries() -> None: """Every file under bundles/, authoring/, integrations/ that was moved from guides/ should have a redirect_from entry pointing to its old location.""" diff --git a/tests/unit/test_bundle_resource_payloads.py b/tests/unit/test_bundle_resource_payloads.py index 02587cc..22c99c5 100644 --- a/tests/unit/test_bundle_resource_payloads.py +++ b/tests/unit/test_bundle_resource_payloads.py @@ -3,10 +3,13 @@ from __future__ import annotations import hashlib +import shutil +import tarfile from pathlib import Path import pytest import yaml +from specfact_cli.utils import ide_setup from tests.unit._script_test_utils import load_module_from_path @@ -16,6 +19,12 @@ _EXPECTED_PROMPTS: dict[str, tuple[str, ...]] = { + "specfact-backlog": ( + "specfact.backlog-add.md", + "specfact.backlog-daily.md", + "specfact.backlog-refine.md", + "specfact.sync-backlog.md", + ), "specfact-codebase": ( "specfact.01-import.md", "specfact.validate.md", @@ -43,13 +52,34 @@ "github_custom.yaml", ) +_IGNORED_DIR_NAMES = {"__pycache__", ".pytest_cache", ".mypy_cache", ".ruff_cache", "logs"} +_IGNORED_SUFFIXES = {".pyc", ".pyo"} + + +def _build_bundle_artifact(bundle: str, tmp_path: Path) -> Path: + bundle_dir = REPO_ROOT / "packages" / bundle + artifact = tmp_path / f"{bundle}.tar.gz" + with tarfile.open(artifact, "w:gz") as archive: + for path in sorted(bundle_dir.rglob("*")): + if not path.is_file(): + continue + 
rel = path.relative_to(bundle_dir) + if any(part in _IGNORED_DIR_NAMES for part in rel.parts): + continue + if path.suffix.lower() in _IGNORED_SUFFIXES: + continue + archive.add(path, arcname=f"{bundle}/{rel.as_posix()}") + return artifact + + +def _top_level_prompt_names(prompt_root: Path) -> set[str]: + return {path.name for path in prompt_root.glob("specfact*.md") if path.is_file()} + @pytest.mark.parametrize("bundle,prompts", list(_EXPECTED_PROMPTS.items())) def test_official_bundles_package_expected_prompt_files(bundle: str, prompts: tuple[str, ...]) -> None: root = REPO_ROOT / "packages" / bundle / "resources" / "prompts" - for name in prompts: - path = root / name - assert path.is_file(), f"missing prompt {path}" + assert _top_level_prompt_names(root) == set(prompts) companion = root / _COMPANION assert companion.is_file(), f"missing companion {companion}" @@ -92,11 +122,55 @@ def test_github_custom_seed_includes_bug_parent_hierarchy() -> None: def test_module_package_layout_matches_init_ide_resource_contract() -> None: """Core discovers resources/prompts and resources/templates/... 
under the module package root.""" backlog = REPO_ROOT / "packages" / "specfact-backlog" + assert (backlog / "resources" / "prompts" / "specfact.backlog-refine.md").is_file() assert (backlog / "resources" / "templates" / "backlog" / "field_mappings" / "ado_default.yaml").is_file() codebase = REPO_ROOT / "packages" / "specfact-codebase" assert (codebase / "resources" / "prompts" / "specfact.01-import.md").is_file() +def test_backlog_artifact_contains_prompt_payload(tmp_path: Path) -> None: + artifact = _build_bundle_artifact("specfact-backlog", tmp_path) + with tarfile.open(artifact, "r:gz") as archive: + names = { + name + for name in archive.getnames() + if name.startswith("specfact-backlog/resources/prompts/") and name.count("/") == 3 and name.endswith(".md") + } + + expected = {f"specfact-backlog/resources/prompts/{prompt}" for prompt in _EXPECTED_PROMPTS["specfact-backlog"]} + assert names == expected + + +def test_core_prompt_discovery_finds_installed_backlog_bundle(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None: + modules_root = tmp_path / "modules" + installed_bundle = modules_root / "specfact-backlog" + shutil.copytree(REPO_ROOT / "packages" / "specfact-backlog", installed_bundle) + repo_path = tmp_path / "repo" + repo_path.mkdir() + + monkeypatch.setattr(ide_setup, "_module_discovery_roots", lambda _repo: [(modules_root, "custom")]) + catalog = ide_setup.discover_prompt_sources_catalog(repo_path, include_package_fallback=False) + + assert "nold-ai/specfact-backlog" in catalog + names = {path.name for path in catalog["nold-ai/specfact-backlog"]} + assert names == set(_EXPECTED_PROMPTS["specfact-backlog"]) + + source_id = "nold-ai/specfact-backlog" + segment = ide_setup.source_id_to_path_segment(source_id) + copied, _ = ide_setup._copy_template_files_to_ide( # pylint: disable=protected-access + repo_path, + "vscode", + list(catalog[source_id]), + source_segment=segment, + write_settings=False, + ) + assert copied + copied_names = {path.name for path 
in copied} + expected_names = {f"{Path(name).stem}.prompt.md" for name in _EXPECTED_PROMPTS["specfact-backlog"]} + assert copied_names == expected_names + assert all(path.parent.name == segment for path in copied) + + def test_resource_change_changes_signed_payload_checksum(tmp_path: Path) -> None: """Bundled resource files participate in the module integrity payload (resource-aware integrity).""" sign_script = load_module_from_path("sign_modules_payload", SIGN_SCRIPT)