diff --git a/.bg-shell/manifest.json b/.bg-shell/manifest.json new file mode 100644 index 0000000..0637a08 --- /dev/null +++ b/.bg-shell/manifest.json @@ -0,0 +1 @@ +[] \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index 0d23a8a..ef07784 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,62 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [1.33.0] - 2026-04-06 + +Overview: Major upstream sync from GSD v1.33.0 introducing a security auditing pipeline, automated documentation generation with codebase verification, dependency analysis, and discuss-phase power mode. Enhanced planner with scope reduction prohibition and decision coverage matrices. Added response language support, worktree isolation control, Kilo runtime support, and schema drift detection; the changes span 150 files. + +### Added + +- `/gsd-secure-phase` command and `secure-phase` workflow for retroactive threat mitigation verification of completed phases in `gsd-opencode/commands/gsd/gsd-secure-phase.md` and `gsd-opencode/get-shit-done/workflows/secure-phase.md` +- `gsd-security-auditor` agent for verifying PLAN.md threat model mitigations against implemented code and producing SECURITY.md in `gsd-opencode/agents/gsd-security-auditor.md` +- `/gsd-docs-update` command and `docs-update` workflow for generating, updating, and verifying up to 9 documentation types with codebase-verified accuracy in `gsd-opencode/commands/gsd/gsd-docs-update.md` and `gsd-opencode/get-shit-done/workflows/docs-update.md` +- `gsd-doc-writer` agent for writing and updating project documentation with codebase exploration in `gsd-opencode/agents/gsd-doc-writer.md` +- `gsd-doc-verifier` agent for verifying factual claims in generated docs against the live codebase in `gsd-opencode/agents/gsd-doc-verifier.md` +- 
`/gsd-analyze-dependencies` command and `analyze-dependencies` workflow for phase dependency graph analysis and ROADMAP.md Depends-on suggestions in `gsd-opencode/commands/gsd/gsd-analyze-dependencies.md` and `gsd-opencode/get-shit-done/workflows/analyze-dependencies.md` +- `discuss-phase-power.md` workflow for power user mode generating all questions upfront into JSON state file with HTML companion UI in `gsd-opencode/get-shit-done/workflows/discuss-phase-power.md` +- `docs.cjs` library for docs-update workflow providing project signal detection, doc inventory with GSD marker detection, and doc tooling detection in `gsd-opencode/get-shit-done/bin/lib/docs.cjs` +- `schema-detect.cjs` library for detecting schema-relevant file changes and verifying database push commands were executed during phases in `gsd-opencode/get-shit-done/bin/lib/schema-detect.cjs` +- `SECURITY.md` template for security audit reports in `gsd-opencode/get-shit-done/templates/SECURITY.md` +- 10 new reference documents: `agent-contracts.md`, `artifact-types.md`, `context-budget.md`, `domain-probes.md`, `gate-prompts.md`, `planner-gap-closure.md`, `planner-reviews.md`, `planner-revision.md`, `revision-loop.md`, and `universal-anti-patterns.md` in `gsd-opencode/get-shit-done/references/` +- `remove-task.json` configuration for task removal patterns during installation in `assets/configs/remove-task.json` +- `v1.33.0.json` supplemental configuration for upstream sync in `assets/configs/v1.33.0.json` +- `manual-update.md` documentation for manual update procedures in `gsd-opencode/docs/manual-update.md` +- `.bg-shell/manifest.json` for background shell configuration + +### Changed + +- Enhanced `gsd-planner` agent with scope reduction prohibition, mandatory decision coverage matrices, threat model generation in PLAN.md, and MCP tool usage guidance in `gsd-opencode/agents/gsd-planner.md` +- Enhanced `gsd-plan-checker` agent with Dimension 7b scope reduction detection, Dimension 11 research resolution 
check, and improved verification checks in `gsd-opencode/agents/gsd-plan-checker.md` +- Enhanced `gsd-verifier` agent with ROADMAP success criteria merging, improved status determination decision tree, and human_needed status priority in `gsd-opencode/agents/gsd-verifier.md` +- Enhanced `gsd-executor` agent with threat model awareness, threat surface scan in summaries, and MCP tool usage guidance in `gsd-opencode/agents/gsd-executor.md` +- Enhanced `gsd-phase-researcher` agent with claim provenance tagging, assumptions log, and security domain research section in `gsd-opencode/agents/gsd-phase-researcher.md` +- Enhanced `execute-phase` workflow with blocking anti-pattern checks, intra-wave file overlap detection, worktree isolation control, adaptive context window enrichment, and response language propagation in `gsd-opencode/get-shit-done/workflows/execute-phase.md` +- Enhanced `plan-phase` workflow with security threat model gate, auto-chain UI-SPEC generation, response language support, and enriched context for 1M-class models in `gsd-opencode/get-shit-done/workflows/plan-phase.md` +- Enhanced `discuss-phase` workflow with `--power` mode, `--chain` mode, blocking anti-pattern checks, interrupted discussion checkpoint recovery, and response language support in `gsd-opencode/get-shit-done/workflows/discuss-phase.md` +- Enhanced `quick` workflow with `--full` and `--validate` flags replacing the previous `--full` semantics, plus composable granular flags in `gsd-opencode/get-shit-done/workflows/quick.md` +- Enhanced `autonomous` workflow with `--to N`, `--only N`, and `--interactive` flags for phase range and single-phase execution in `gsd-opencode/get-shit-done/workflows/autonomous.md` +- Enhanced `verify-phase` workflow with test quality audit step covering disabled tests, circular test detection, and expected value provenance in `gsd-opencode/get-shit-done/workflows/verify-phase.md` +- Enhanced `verify-work` workflow with automated UI verification via Playwright-MCP integration 
in `gsd-opencode/get-shit-done/workflows/verify-work.md` +- Enhanced `reapply-patches` command with three-way comparison, Kilo runtime support, and expanded environment variable detection in `gsd-opencode/commands/gsd/gsd-reapply-patches.md` +- Enhanced `update` workflow with Kilo runtime support, PREFERRED_CONFIG_DIR detection, and expanded runtime candidate list in `gsd-opencode/get-shit-done/workflows/update.md` +- Enhanced `debug` command with `--diagnose` flag for root-cause-only investigation in `gsd-opencode/commands/gsd/gsd-debug.md` +- Enhanced `manager` workflow with configurable per-step passthrough flags from config in `gsd-opencode/get-shit-done/workflows/manager.md` +- Enhanced `review` workflow with CodeRabbit and OpenCode CLI support, environment-based runtime detection, and Antigravity agent compatibility in `gsd-opencode/get-shit-done/workflows/review.md` +- Enhanced `progress` workflow with corrected command ordering (`/new` then command) in `gsd-opencode/get-shit-done/workflows/progress.md` +- Updated `gsd-tools.cjs` with `docs-init`, `check-commit`, `verify schema-drift`, `state planned-phase`, `state validate`, and `state sync` commands in `gsd-opencode/get-shit-done/bin/gsd-tools.cjs` +- Updated `core.cjs` with extracted CONFIG_DEFAULTS constants, workstream session environment keys, planning lock support, and `phaseTokenMatches` helper in `gsd-opencode/get-shit-done/bin/lib/core.cjs` +- Updated `state.cjs` with atomic read-modify-write for concurrent agent safety, progress derivation from disk counts, and diagnostic warnings for field mismatches in `gsd-opencode/get-shit-done/bin/lib/state.cjs` +- Updated `phase.cjs` with project-code-prefixed phase support, custom phase ID handling, and improved phase matching in `gsd-opencode/get-shit-done/bin/lib/phase.cjs` +- Updated `roadmap.cjs` with structured phase search, success criteria extraction, and malformed roadmap detection in `gsd-opencode/get-shit-done/bin/lib/roadmap.cjs` +- Updated 
`verify.cjs` with STATE.md/ROADMAP.md cross-validation and config field validation checks in `gsd-opencode/get-shit-done/bin/lib/verify.cjs` +- Updated `commands.cjs` with `determinePhaseStatus` introducing "Executed" status, slug length limit, and decimal phase matching in `gsd-opencode/get-shit-done/bin/lib/commands.cjs` +- Updated `frontmatter.cjs` with quote-aware inline array splitting and empty must_haves diagnostic warning in `gsd-opencode/get-shit-done/bin/lib/frontmatter.cjs` +- Updated `config.cjs` with new valid config keys for worktrees, subagent timeout, manager flags, response language, and project code in `gsd-opencode/get-shit-done/bin/lib/config.cjs` +- Updated `profile-output.cjs` with project skills discovery from standard directories and skills section generation for AGENTS.md in `gsd-opencode/get-shit-done/bin/lib/profile-output.cjs` +- Updated `config.json` template with security enforcement, ASVS level, security block-on, and project code fields in `gsd-opencode/get-shit-done/templates/config.json` +- Updated 12 additional workflow files with minor improvements across `gsd-opencode/get-shit-done/workflows/` +- Updated all documentation across 5 locales (en, ja-JP, ko-KR, pt-BR, zh-CN) reflecting new commands and features in `gsd-opencode/docs/` + ## [1.22.2] - 2026-03-30 Overview: Major upstream sync from GSD v1.30.0 adding autonomous execution, fast mode, UI design pipeline, multi-project workspaces, user profiling, forensics, and 25 new slash commands. Full documentation now available in four additional locales (ja-JP, ko-KR, pt-BR, zh-CN). Added `mode: subagent` declarations to all agent definition files. diff --git a/README.md b/README.md index 1a1f0e4..4e999f8 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@
-# GET SHIT DONE for OpenCode. (Based on TÂCHES v1.30.0 - 2026-03-30) +# GET SHIT DONE for OpenCode. (Based on TÂCHES v1.33.0 - 2026-04-04) **A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES. (Adapted for OpenCode by rokicool and enthusiasts)** @@ -87,6 +87,12 @@ I just love both GSD and OpenCode. I felt like having GSD available only for Cla — **Roman** +## Version 1.33.0 + +Once again we are keeping up with the original GSDv1 [v1.33.0](https://github.com/gsd-build/get-shit-done/releases/tag/v1.33.0) (2026-04-04). + +Locally there are lots of changes as well. The most important one: I removed `task()` calls, since they are not supported by OpenCode, and replaced them with direct `@gsd-{agent}` agent calls. + ## Version 1.30.0 We are keeping up with original GSDv1 v1.30.0 (2026-03-30) diff --git a/assets/antipatterns.toml b/assets/antipatterns.toml index 7977e53..32b83b8 100644 --- a/assets/antipatterns.toml +++ b/assets/antipatterns.toml @@ -33,5 +33,5 @@ forbidden_strings = [ "websearch, webfetch, mcp__context7__*", "workflows/set-profile.md", "quality/balanced/budget", - "gsd:" + 'gsd:update\\|GSD update\\|gsd-install', ] diff --git a/assets/bin/gsd-translate-in-place.js b/assets/bin/gsd-translate-in-place.js index 60a5739..e85ea6b 100644 --- a/assets/bin/gsd-translate-in-place.js +++ b/assets/bin/gsd-translate-in-place.js @@ -19,29 +19,31 @@ * 2 Runtime error (file I/O, permissions) */ -import { readFile, writeFile, access, mkdir } from 'node:fs/promises'; -import { resolve, dirname, basename } from 'node:path'; -import { fileURLToPath } from 'node:url'; +import { readFile, writeFile, access, mkdir } from "node:fs/promises"; +import { resolve, dirname, basename } from "node:path"; +import { fileURLToPath } from "node:url"; // Use dynamic import for tinyglobby let glob; try { - const tinyglobby = await import('tinyglobby'); + const tinyglobby = await import("tinyglobby"); glob = tinyglobby.glob; } catch
(e) { // Fallback if tinyglobby isn't available glob = async (patterns, options) => { - console.error('Warning: tinyglobby not available. Install with: npm install tinyglobby'); + console.error( + "Warning: tinyglobby not available. Install with: npm install tinyglobby", + ); return []; }; } // Import our modules -import { TextTranslator } from '../lib/translator.js'; -import { CliFormatter } from '../lib/cli.js'; -import { BackupManager } from '../lib/backup-manager.js'; -import { GitChecker } from '../lib/git-checker.js'; -import { Validator } from '../lib/validator.js'; +import { TextTranslator } from "../lib/translator.js"; +import { CliFormatter } from "../lib/cli.js"; +import { BackupManager } from "../lib/backup-manager.js"; +import { GitChecker } from "../lib/git-checker.js"; +import { Validator } from "../lib/validator.js"; const __filename = fileURLToPath(import.meta.url); const __dirname = dirname(__filename); @@ -64,21 +66,21 @@ function parseArgs(args) { apply: false, showDiff: false, useColor: true, - help: false + help: false, }; for (let i = 0; i < args.length; i++) { const arg = args[i]; - if (arg === '--apply') { + if (arg === "--apply") { result.apply = true; - } else if (arg === '--show-diff') { + } else if (arg === "--show-diff") { result.showDiff = true; - } else if (arg === '--no-color') { + } else if (arg === "--no-color") { result.useColor = false; - } else if (arg === '--help' || arg === '-h') { + } else if (arg === "--help" || arg === "-h") { result.help = true; - } else if (!arg.startsWith('--')) { + } else if (!arg.startsWith("--")) { // Collect all non-option arguments as config file paths result.configFiles.push(arg); } @@ -103,7 +105,7 @@ async function loadSingleConfig(configPath) { let content; try { - content = await readFile(resolvedPath, 'utf-8'); + content = await readFile(resolvedPath, "utf-8"); } catch (error) { throw new Error(`Cannot read config file: ${error.message}`); } @@ -139,19 +141,25 @@ function 
validateConfig(config, configPath) { if (!rule.pattern) { throw new Error(`Rule ${i + 1} must have a "pattern" in ${configPath}`); } - if (typeof rule.replacement !== 'string') { - throw new Error(`Rule ${i + 1} must have a "replacement" string in ${configPath}`); + if (typeof rule.replacement !== "string") { + throw new Error( + `Rule ${i + 1} must have a "replacement" string in ${configPath}`, + ); } } // Validate include option if present if (config.include && config.include.length > 0) { if (!Array.isArray(config.include)) { - throw new Error(`Config "include" must be an array of strings: ${configPath}`); + throw new Error( + `Config "include" must be an array of strings: ${configPath}`, + ); } for (let i = 0; i < config.include.length; i++) { - if (typeof config.include[i] !== 'string') { - throw new Error(`Config "include" item ${i + 1} must be a string in ${configPath}`); + if (typeof config.include[i] !== "string") { + throw new Error( + `Config "include" item ${i + 1} must be a string in ${configPath}`, + ); } } } @@ -164,13 +172,18 @@ function validateConfig(config, configPath) { */ function setConfigDefaults(config) { return { - patterns: config.patterns || ['**/*'], + patterns: config.patterns || ["**/*"], include: config.include || [], - exclude: config.exclude || ['node_modules/**', '.git/**', '.translate-backups/**'], + exclude: config.exclude || [ + "node_modules/**", + ".git/**", + ".translate-backups/**", + ], maxFileSize: config.maxFileSize || 10 * 1024 * 1024, rules: config.rules || [], - _forbidden_strings_after_translation: config._forbidden_strings_after_translation || [], - ...config + _forbidden_strings_after_translation: + config._forbidden_strings_after_translation || [], + ...config, }; } @@ -181,7 +194,7 @@ function setConfigDefaults(config) { */ function mergeConfigs(configs) { if (!Array.isArray(configs) || configs.length === 0) { - throw new Error('At least one config is required'); + throw new Error("At least one config is 
required"); } if (configs.length === 1) { @@ -211,11 +224,16 @@ function mergeConfigs(configs) { } // Merge _forbidden_strings_after_translation: combine and deduplicate - if (config._forbidden_strings_after_translation && config._forbidden_strings_after_translation.length > 0) { - merged._forbidden_strings_after_translation = [...new Set([ - ...merged._forbidden_strings_after_translation, - ...config._forbidden_strings_after_translation - ])]; + if ( + config._forbidden_strings_after_translation && + config._forbidden_strings_after_translation.length > 0 + ) { + merged._forbidden_strings_after_translation = [ + ...new Set([ + ...merged._forbidden_strings_after_translation, + ...config._forbidden_strings_after_translation, + ]), + ]; } // patterns: use first config's patterns (they're defaults) @@ -227,7 +245,14 @@ function mergeConfigs(configs) { } // Any other custom properties: last config wins - const knownKeys = ['patterns', 'include', 'exclude', 'maxFileSize', 'rules', '_forbidden_strings_after_translation']; + const knownKeys = [ + "patterns", + "include", + "exclude", + "maxFileSize", + "rules", + "_forbidden_strings_after_translation", + ]; for (const key of Object.keys(config)) { if (!knownKeys.includes(key)) { merged[key] = config[key]; @@ -245,7 +270,7 @@ function mergeConfigs(configs) { */ async function loadConfigs(configPaths) { if (!Array.isArray(configPaths) || configPaths.length === 0) { - throw new Error('At least one config file is required'); + throw new Error("At least one config file is required"); } // Load all configs @@ -270,18 +295,22 @@ async function loadConfig(configPath) { } async function ensureCommandNames(apply) { - const commandsDir = resolve(__dirname, '../../gsd-opencode/commands/gsd'); + const commandsDir = resolve(__dirname, "../../gsd-opencode/commands/gsd"); let commandFiles; try { - commandFiles = await glob(['*.md'], { cwd: commandsDir, onlyFiles: true, absolute: true }); + commandFiles = await glob(["*.md"], { + cwd: 
commandsDir, + onlyFiles: true, + absolute: true, + }); } catch { - console.log('No command files found to check for missing name: field.'); + console.log("No command files found to check for missing name: field."); return { fixed: 0, missing: 0 }; } if (!commandFiles || commandFiles.length === 0) { - console.log('No command files found to check for missing name: field.'); + console.log("No command files found to check for missing name: field."); return { fixed: 0, missing: 0 }; } @@ -289,11 +318,11 @@ async function ensureCommandNames(apply) { let missing = 0; for (const filePath of commandFiles) { - const commandName = basename(filePath, '.md'); + const commandName = basename(filePath, ".md"); let content; try { - content = await readFile(filePath, 'utf-8'); + content = await readFile(filePath, "utf-8"); } catch { continue; } @@ -309,22 +338,26 @@ async function ensureCommandNames(apply) { const newFrontmatter = `name: ${commandName}\n${frontmatter}`; const newContent = content.replace( /^---\n([\s\S]*?)\n---/, - `---\n${newFrontmatter}\n---` + `---\n${newFrontmatter}\n---`, ); if (apply) { - await writeFile(filePath, newContent, 'utf-8'); + await writeFile(filePath, newContent, "utf-8"); console.log(` Fixed missing name: in ${commandName}.md`); fixed++; } else { - console.log(` [dry-run] Would add name: ${commandName} to ${commandName}.md`); + console.log( + ` [dry-run] Would add name: ${commandName} to ${commandName}.md`, + ); } } if (missing > 0) { - console.log(`Command name check: ${missing} file(s) missing name: field${apply ? `, ${fixed} fixed` : ' (dry-run)'}`); + console.log( + `Command name check: ${missing} file(s) missing name: field${apply ? 
`, ${fixed} fixed` : " (dry-run)"}`, + ); } else { - console.log('All command files have name: in frontmatter.'); + console.log("All command files have name: in frontmatter."); } return { fixed, missing }; @@ -332,8 +365,8 @@ async function ensureCommandNames(apply) { async function discoverGsdSkillReferences(searchPatterns) { const files = await glob(searchPatterns, { - ignore: ['node_modules/**', '.git/**'], - onlyFiles: true + ignore: ["node_modules/**", ".git/**"], + onlyFiles: true, }); if (!files || files.length === 0) { @@ -345,7 +378,7 @@ async function discoverGsdSkillReferences(searchPatterns) { for (const file of files) { try { - const content = await readFile(file, 'utf-8'); + const content = await readFile(file, "utf-8"); let match; while ((match = pattern.exec(content)) !== null) { skillRefs.add(`gsd-${match[1]}`); @@ -360,8 +393,8 @@ async function discoverGsdSkillReferences(searchPatterns) { } async function generateSkillWrappers(commandNames) { - const commandsDir = resolve(__dirname, '../../gsd-opencode/commands/gsd'); - const skillsBaseDir = resolve(__dirname, '../../gsd-opencode/skills'); + const commandsDir = resolve(__dirname, "../../gsd-opencode/commands/gsd"); + const skillsBaseDir = resolve(__dirname, "../../gsd-opencode/skills"); const commandSet = new Set(commandNames); let created = 0; @@ -371,7 +404,7 @@ async function generateSkillWrappers(commandNames) { for (const commandName of commandSet) { const commandFile = resolve(commandsDir, `${commandName}.md`); const skillDir = resolve(skillsBaseDir, commandName); - const skillFile = resolve(skillDir, 'SKILL.md'); + const skillFile = resolve(skillDir, "SKILL.md"); try { await access(commandFile); @@ -389,9 +422,11 @@ async function generateSkillWrappers(commandNames) { // file doesn't exist yet, proceed } - const content = await readFile(commandFile, 'utf-8'); + const content = await readFile(commandFile, "utf-8"); - const frontmatterMatch = 
content.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/); + const frontmatterMatch = content.match( + /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/, + ); let skillContent; if (frontmatterMatch) { @@ -404,21 +439,25 @@ async function generateSkillWrappers(commandNames) { const newFrontmatter = [ `---`, `name: ${commandName}`, - `description: Implementation of ${commandDisplayName} command`, - `---` - ].join('\n'); + `description: Implementation of one of the commands`, + `---`, + ].join("\n"); - skillContent = newFrontmatter + '\n\n' + body; + skillContent = newFrontmatter + "\n\n" + body; } else { - skillContent = `---\nname: ${commandName}\ndescription: Implementation of ${commandName} command\n---\n\n` + content; + skillContent = + `---\nname: ${commandName}\ndescription: Implementation of one of the commands\n---\n\n` + + content; } await mkdir(skillDir, { recursive: true }); - await writeFile(skillFile, skillContent, 'utf-8'); + await writeFile(skillFile, skillContent, "utf-8"); created++; } - console.log(`Skill wrappers: created ${created}, skipped ${skipped} (already exist), not found ${notFound}`); + console.log( + `Skill wrappers: created ${created}, skipped ${skipped} (already exist), not found ${notFound}`, + ); return { created, skipped, notFound }; } @@ -431,18 +470,23 @@ async function main() { // Create formatter const formatter = new CliFormatter({ useColor: args.useColor, - showDiff: args.showDiff + showDiff: args.showDiff, }); // Show help - if (args.help || (args.configFiles.length === 0 && process.argv.length <= 2)) { + if ( + args.help || + (args.configFiles.length === 0 && process.argv.length <= 2) + ) { console.log(formatter.formatHelp()); process.exit(EXIT_SUCCESS); } // Validate config file argument if (args.configFiles.length === 0) { - console.error(formatter.formatError('At least one config file is required')); + console.error( + formatter.formatError("At least one config file is required"), + ); console.log(formatter.formatHelp()); 
process.exit(EXIT_VALIDATION_ERROR); } @@ -466,12 +510,12 @@ async function main() { if (config._forbidden_strings_after_translation) { for (const str of config._forbidden_strings_after_translation) { // Escape special regex characters for literal matching - const escaped = str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); + const escaped = str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); forbiddenPatterns.push({ - pattern: new RegExp(escaped, 'g'), + pattern: new RegExp(escaped, "g"), message: `Found forbidden string "${str}"`, - suggestion: 'Check translation rules', - exceptions: [] + suggestion: "Check translation rules", + exceptions: [], }); } } @@ -485,33 +529,39 @@ async function main() { if (config.include.length > 0) { // Get all files matching include patterns const includedFiles = await glob(config.include, { - onlyFiles: true + onlyFiles: true, }); // Apply exclude patterns to the included files - const excludedSet = new Set(await glob(config.include, { - ignore: config.exclude, - onlyFiles: true - })); - files = includedFiles.filter(f => excludedSet.has(f)); + const excludedSet = new Set( + await glob(config.include, { + ignore: config.exclude, + onlyFiles: true, + }), + ); + files = includedFiles.filter((f) => excludedSet.has(f)); } else { // Use patterns with exclude (existing behavior) files = await glob(config.patterns, { ignore: config.exclude, - onlyFiles: true + onlyFiles: true, }); } } catch (error) { - console.error(formatter.formatError(`Failed to resolve patterns: ${error.message}`)); + console.error( + formatter.formatError(`Failed to resolve patterns: ${error.message}`), + ); process.exit(EXIT_RUNTIME_ERROR); } if (files.length === 0) { - console.log(formatter.formatWarning('No files found matching the patterns')); + console.log( + formatter.formatWarning("No files found matching the patterns"), + ); process.exit(EXIT_VALIDATION_ERROR); } console.log(`Found ${files.length} file(s) to process`); - console.log(''); + console.log(""); // Check git 
status for uncommitted changes if (args.apply) { @@ -529,20 +579,22 @@ async function main() { const filePath = files[i]; // Show progress - process.stdout.write(formatter.formatProgress(i + 1, files.length, filePath)); + process.stdout.write( + formatter.formatProgress(i + 1, files.length, filePath), + ); // Translate the file const result = await translator.translateFile(filePath); results.push({ filePath, - ...result + ...result, }); if (result.wasModified && !result.error) { modifiedFiles.push({ filePath, - result + result, }); } } @@ -552,31 +604,35 @@ async function main() { // Show diffs if requested if (args.showDiff) { - console.log(''); - console.log(formatter.colorize('═'.repeat(70), 'gray')); - console.log(formatter.colorize(' Diffs', 'bright')); - console.log(formatter.colorize('═'.repeat(70), 'gray')); + console.log(""); + console.log(formatter.colorize("═".repeat(70), "gray")); + console.log(formatter.colorize(" Diffs", "bright")); + console.log(formatter.colorize("═".repeat(70), "gray")); for (const { filePath, result } of modifiedFiles) { - const diff = formatter.formatDiff(filePath, result.original, result.translated); + const diff = formatter.formatDiff( + filePath, + result.original, + result.translated, + ); console.log(diff); } } // Show summary - const summaryResults = results.map(r => ({ + const summaryResults = results.map((r) => ({ filePath: r.filePath, changeCount: r.changeCount, wasModified: r.wasModified, - error: r.error + error: r.error, })); console.log(formatter.formatSummary(summaryResults)); // Apply changes if requested if (args.apply && modifiedFiles.length > 0) { - console.log(formatter.formatWarning('Applying changes...')); - console.log(''); + console.log(formatter.formatWarning("Applying changes...")); + console.log(""); let successCount = 0; let errorCount = 0; @@ -585,32 +641,44 @@ async function main() { // Create backup first const backup = await backupManager.createBackup(filePath); if (!backup.success) { - 
console.error(formatter.formatError(`Failed to backup ${filePath}: ${backup.error}`)); + console.error( + formatter.formatError( + `Failed to backup ${filePath}: ${backup.error}`, + ), + ); errorCount++; continue; } // Write translated content try { - await writeFile(filePath, result.translated, 'utf-8'); + await writeFile(filePath, result.translated, "utf-8"); console.log(formatter.formatSuccess(`Updated ${filePath}`)); successCount++; } catch (error) { - console.error(formatter.formatError(`Failed to write ${filePath}: ${error.message}`)); + console.error( + formatter.formatError( + `Failed to write ${filePath}: ${error.message}`, + ), + ); errorCount++; } } - console.log(''); + console.log(""); console.log(`Applied changes to ${successCount} file(s)`); if (errorCount > 0) { - console.error(formatter.formatError(`Failed to update ${errorCount} file(s)`)); + console.error( + formatter.formatError(`Failed to update ${errorCount} file(s)`), + ); } // Run post-translation validation - console.log(''); - console.log(formatter.colorize('Running post-translation validation...', 'bright')); + console.log(""); + console.log( + formatter.colorize("Running post-translation validation...", "bright"), + ); const validationResults = []; for (const { filePath } of modifiedFiles) { @@ -621,10 +689,16 @@ async function main() { const summaryReport = validator.formatSummaryReport(validationResults); console.log(summaryReport); - const hasViolations = validationResults.some(r => !r.result.valid && !r.result.error); + const hasViolations = validationResults.some( + (r) => !r.result.valid && !r.result.error, + ); if (hasViolations) { - console.error(formatter.formatError('Post-translation validation failed. Review violations above.')); + console.error( + formatter.formatError( + "Post-translation validation failed. 
Review violations above.", + ), + ); process.exit(EXIT_VALIDATION_ERROR); } @@ -632,29 +706,35 @@ async function main() { process.exit(EXIT_RUNTIME_ERROR); } } else if (!args.apply && modifiedFiles.length > 0) { - console.log(formatter.formatWarning('This was a dry-run. Use --apply to make changes.')); + console.log( + formatter.formatWarning( + "This was a dry-run. Use --apply to make changes.", + ), + ); } - console.log(''); + console.log(""); - console.log('Checking command files for missing name: in frontmatter...'); + console.log("Checking command files for missing name: in frontmatter..."); await ensureCommandNames(args.apply); - console.log(''); + console.log(""); const skillRefs = await discoverGsdSkillReferences(config.patterns); if (skillRefs.size > 0) { - console.log(`Discovered ${skillRefs.size} gsd skill reference(s): ${[...skillRefs].join(', ')}`); + console.log( + `Discovered ${skillRefs.size} gsd skill reference(s): ${[...skillRefs].join(", ")}`, + ); await generateSkillWrappers([...skillRefs]); } else { - console.log('No gsd skill references found in processed files.'); + console.log("No gsd skill references found in processed files."); } - console.log(formatter.formatSuccess('Done!')); + console.log(formatter.formatSuccess("Done!")); process.exit(EXIT_SUCCESS); } // Run main -main().catch(error => { +main().catch((error) => { console.error(`\nError: ${error.message}`); process.exit(EXIT_RUNTIME_ERROR); }); diff --git a/assets/configs/config.json b/assets/configs/config.json index 6d49716..177c4f3 100644 --- a/assets/configs/config.json +++ b/assets/configs/config.json @@ -449,8 +449,8 @@ "description": "gsd-reapply-patches.md" }, { - "pattern": "Manage parallel workstreams for concurrent milestone work.", - "replacement": "\nManage parallel workstreams for concurrent milestone work.\n", + "pattern": "/gsd-workstreams\n\nManage parallel workstreams for concurrent milestone work.", + "replacement": "/gsd-workstreams\n\n\n Manage parallel workstreams 
for concurrent milestone work.\n", "description": "gsd-reapply-patches.md" }, { @@ -493,6 +493,18 @@ "replacement": "description: \"Two models: one for reseach and planing, other for execution and verification\"", "description": "Use - two models: advanced and less advanced - profile" }, + { + "pattern": "1. **Never** read agent definition files (`agents/*.md`) -- `subagent_type` auto-loads them.", + "replacement": "1. **Never** read agent definition files (`agents/*.md`) -- `@` call auto-loads them.", + "description": "gsd-opencode/get-shit-done/references/universal-anti-patterns.md" + }, + + { + "pattern": "ALWAYS use `subagent_type: \"gsd-{agent}\"` (e.g., `gsd-phase-researcher`, `gsd-executor`, `gsd-planner`)", + "replacement": "ALWAYS use `@gsd-{agent}` call (e.g., `@gsd-phase-researcher`, `@gsd-executor`, `@gsd-planner`)", + "description": "gsd-opencode/get-shit-done/references/universal-anti-patterns.md" + }, + { "pattern": "label: \"Budget\"", "replacement": "label: \"Genius (most flexible)\"", diff --git a/assets/configs/remove-task.json b/assets/configs/remove-task.json new file mode 100644 index 0000000..6c568af --- /dev/null +++ b/assets/configs/remove-task.json @@ -0,0 +1,213 @@ +{ + "_description": "Replace task() calls with @subagent_type shorthand syntax for gsd subagent types", + "_source": "Claude Code (get-shit-done-cc)", + "_target": "OpenCode (gsd-opencode)", + "_usage": "node assets/bin/gsd-translate-in-place.js assets/configs/remove-task.json [--apply] [--show-diff]", + "include": [ + "gsd-opencode/commands/gsd/**", + "gsd-opencode/get-shit-done/references/**", + "gsd-opencode/get-shit-done/templates/**", + "gsd-opencode/get-shit-done/workflows/**" + ], + "exclude": [ + "node_modules/**", + "gsd-opencode/node_modules/**", + ".git/**", + ".translate-backups/**", + "assets/**", + "**/*.bak", + "**/*.png", + "**/*.jpg", + "**/*.gif", + "**/*.ico", + "**/*.svg", + "original/**" + ], + "maxFileSize": 10485760, + "rules": [ + { + "_comment": "1a. 
Multiline task() with variable prompt, model, description (standard 4-line format)",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=(\\w+),\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with variable prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "1b. Multiline task() with quoted prompt, model, description",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"([^\"]+)\",\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with quoted prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "2a. Single-line task() with variable prompt, model, description",
+      "pattern": "task\\(\\s*prompt=(\\w+)\\s*,\\s*subagent_type=\"(gsd-[^\"]+)\"\\s*,\\s*model=\"[^\"]*\"\\s*,\\s*description=\"[^\"]*\"\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace single-line task() with variable prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "2b. Single-line task() with quoted prompt, model, description",
+      "pattern": "task\\(\\s*prompt=\"([^\"]+)\"\\s*,\\s*subagent_type=\"(gsd-[^\"]+)\"\\s*,\\s*model=\"[^\"]*\"\\s*,\\s*description=\"[^\"]*\"\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace single-line task() with quoted prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "3.
task() with triple-quoted prompt: task(prompt=\"\"\"...\"\"\", subagent_type=\"gsd-xxx\", ...)",
+      "pattern": "task\\(\\s*prompt=\"\"\"([\\s\\S]*?)\"\"\"\\s*,\\s*subagent_type=\"(gsd-[^\"]+)\"\\s*,\\s*model=\"[^\"]*\"\\s*,\\s*description=\"[^\"]*\"\\s*\\)",
+      "replacement": "@$2 \"\"\"$1\"\"\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace task() with triple-quoted prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "4a. Multiline task() with variable prompt, description only (no model)",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=(\\w+),\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() without model to @subagent shorthand"
+    },
+    {
+      "_comment": "4b. Single-line task() with variable prompt, description only (no model)",
+      "pattern": "task\\(\\s*prompt=(\\w+)\\s*,\\s*subagent_type=\"(gsd-[^\"]+)\"\\s*,\\s*description=\"[^\"]*\"\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace single-line task() without model to @subagent shorthand"
+    },
+    {
+      "_comment": "5a. Multiline task() with variable prompt, extra args (isolation etc), description, no model",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=([^,\\n]+),\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n(?:\\s*\\w+=\"?[^\"]*\"?\\s*,?\\s*\\r?\\n)*\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with extra args, no model, to @subagent shorthand"
+    },
+    {
+      "_comment": "6.
task() with subagent_type first and triple-quoted prompt",
+      "pattern": "task\\(subagent_type=\"(gsd-[^\"]+)\",\\s*prompt=\"\"\"([\\s\\S]*?)\"\"\"\\s*\\)",
+      "replacement": "@$1 \"\"\"$2\"\"\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace task() with subagent_type first and triple-quoted prompt"
+    },
+    {
+      "_comment": "7. Multiline task() with expression prompt (containing + operator)",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=([^,]+\\+[^,]+),\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace task() with expression prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "8. Multiline task() with variable prompt and trailing comment",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=(\\w+),\\s*(?:#[^\\n]*)?\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with comment to @subagent shorthand"
+    },
+    {
+      "_comment": "9. Multiline task() with variable prompt, trailing comment, model, description",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=(\\w+),\\s*(?:#[^\\n]*)?\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 $1",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with comment and model to @subagent shorthand"
+    },
+    {
+      "_comment": "10.
Multiline task() with quoted prompt containing newlines, subagent_type, model (no description)",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"([^\"]+\\n[^\"]*)\",\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with multiline quoted prompt, no description"
+    },
+    {
+      "_comment": "11. Multiline task() with quoted prompt containing newlines, subagent_type, model, description",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"([^\"]+\\n[^\"]*)\",\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with multiline quoted prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "12. Multiline task() with subagent_type first, model, extra args, description, quoted prompt at end",
+      "pattern": "task\\(\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n(?:\\s*\\w+=\"?[^\"]*\"?\\s*,\\s*\\r?\\n)*\\s*description=\"[^\"]*\",\\s*\\r?\\n\\s*prompt=\"([^\"]+)\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$1 \"$2\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with subagent_type first, prompt last to @subagent shorthand"
+    },
+    {
+      "_comment": "13.
Multiline task() with subagent_type first, extra args, multiline quoted prompt at end (map-codebase pattern)",
+      "pattern": "task\\(\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n(?:\\s*\\w+=.+\\s*\\r?\\n)*\\s*description=\"[^\"]*\",\\s*\\r?\\n\\s*prompt=\"([^\"]+)\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$1 \"$2\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace task() subagent_type first with prompt last (map-codebase pattern)"
+    },
+    {
+      "_comment": "14. Indented task() with subagent_type first, model, isolation, multiline quoted prompt (execute-phase deep indent)",
+      "pattern": "task\\(\\s*\\r?\\n\\s+subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s+model=\"[^\"]*\",\\s*\\r?\\n\\s+isolation=\"[^\"]*\",\\s*\\r?\\n\\s+prompt=\"([\\s\\S]*?)\"\\s*\\r?\\n\\s+\\)",
+      "replacement": "@$1 \"$2\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace indented task() with subagent_type first, isolation, prompt last (execute-phase deep indent)"
+    },
+    {
+      "_comment": "15. Multiline task() with quoted prompt containing newlines, subagent_type, model, extra args, description",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"([^\"]+)\",\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*\\w+=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with extra arg (isolation) between model and description"
+    },
+    {
+      "_comment": "16.
Multiline task() with expression prompt (multi-line + concatenation), subagent_type, model, description",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"([^\"]+)\"\\s*\\+[\\s\\S]+?,\\s*\\r?\\n\\s*subagent_type=\"(gsd-[^\"]+)\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@$2 \"$1\"",
+      "isRegex": true,
+      "caseSensitive": false,
+      "description": "Replace multiline task() with multi-line expression prompt to @subagent shorthand"
+    },
+    {
+      "_comment": "17. Prose: 'spawn a task() in parallel' → @subagent reference",
+      "pattern": "spawn a task\\(\\) in parallel",
+      "replacement": "spawn a relevant subagent using `@subagent prompt` syntax in parallel",
+      "isRegex": true,
+      "caseSensitive": true,
+      "description": "Replace prose 'spawn a task() in parallel' with @subagent reference"
+    },
+    {
+      "_comment": "18. task() with subagent_type=\"general\" referencing gsd-advisor-researcher → @gsd-advisor-researcher",
+      "pattern": "task\\(\\s*\\r?\\n\\s*prompt=\"(First, read @[\\s\\S]*?\\$\\{AGENT_SKILLS_ADVISOR\\})\",\\s*\\r?\\n\\s*subagent_type=\"general\",\\s*\\r?\\n\\s*model=\"[^\"]*\",\\s*\\r?\\n\\s*description=\"[^\"]*\"\\s*\\r?\\n\\s*\\)",
+      "replacement": "@gsd-advisor-researcher \"$1\"",
+      "isRegex": true,
+      "caseSensitive": true,
+      "description": "Replace task(subagent_type=general) with gsd-advisor-researcher prompt to @gsd-advisor-researcher shorthand"
+    },
+    {
+      "_comment": "19. Prose: 'All task() calls spawn simultaneously' → subagent reference",
+      "pattern": "All task\\(\\) calls spawn simultaneously",
+      "replacement": "All subagents spawn simultaneously",
+      "isRegex": true,
+      "caseSensitive": true,
+      "description": "Replace prose 'All task() calls spawn simultaneously' with subagent reference"
+    },
+    {
+      "_comment": "20.
task(subagent_type=\"general\", prompt=\"\"\"...\"\"\") with triple-quoted prompt → @general",
+      "pattern": "task\\(subagent_type=\"general\",\\s*prompt=\"\"\"([\\s\\S]*?)\"\"\"\\s*\\)",
+      "replacement": "@general \"\"\"$1\"\"\"",
+      "isRegex": true,
+      "caseSensitive": true,
+      "description": "Replace task(subagent_type=general, prompt=triple-quoted) with @general shorthand"
+    }
+  ]
+}
diff --git a/assets/configs/v1.33.0.json b/assets/configs/v1.33.0.json
new file mode 100644
index 0000000..4b9b7ab
--- /dev/null
+++ b/assets/configs/v1.33.0.json
@@ -0,0 +1,18 @@
+{
+  "_description": "Supplemental translation rules for v1.33.0 -- fixes remaining forbidden strings",
+  "include": ["gsd-opencode/**"],
+  "exclude": [
+    "node_modules/**",
+    ".git/**",
+    ".translate-backups/**",
+    "**/oc-*",
+    "**/*-oc-*"
+  ],
+  "rules": [
+    {
+      "pattern": "`general-purpose`",
+      "replacement": "`general`",
+      "description": "Fix: backtick-quoted general-purpose agent type name to general"
+    }
+  ]
+}
diff --git a/assets/prompts/M-COPY-AND-TRANSLATE.md b/assets/prompts/M-COPY-AND-TRANSLATE.md
index 4ec09a9..549a985 100644
--- a/assets/prompts/M-COPY-AND-TRANSLATE.md
+++ b/assets/prompts/M-COPY-AND-TRANSLATE.md
@@ -11,7 +11,8 @@ This project maintains `gsd-opencode/` as an adapted fork of the upstream `origi
 1. **Copy** -- pull latest files from the submodule into `gsd-opencode/`
 2. **Translate** -- replace Claude Code naming, paths, tools, and commands with OpenCode equivalents
 3. **Add Agents Mode** -- inject `mode: subagent` into all agent definition files
-4. **Validate** -- ensure zero forbidden strings remain in the translated files
+4. **Modify Agent Calls** -- replace `task()` calls with `@subagent_type` shorthand syntax
+5. **Validate** -- ensure zero forbidden strings remain in the translated files

 ### Tools
@@ -94,6 +95,96 @@ Produce a brief report with:
 The primary config is `assets/configs/config.json`.
It contains all translation rules (URLs, paths, commands, tool names, profile names, colors, HTML tags, etc.).
+---
+## Step 2A: Modify Agent Calls
+
+Replace `task()` function calls with the OpenCode `@subagent_type` shorthand syntax. The config `assets/configs/remove-task.json` contains regex rules that match the most common `task()` call patterns found in commands, workflows, references, and templates.
+
+**Scope**: Only files in `gsd-opencode/commands/gsd/`, `gsd-opencode/get-shit-done/references/`, `gsd-opencode/get-shit-done/templates/`, and `gsd-opencode/get-shit-done/workflows/`.
+
+**What gets replaced** (examples):
+
+```
+# Before (multiline variable prompt)
+task(
+  prompt=filled_prompt,
+  subagent_type="gsd-phase-researcher",
+  model="{researcher_model}",
+  description="Research Phase {phase}"
+)
+
+# After
+@gsd-phase-researcher filled_prompt
+```
+
+```
+# Before (single-line quoted prompt)
+task(prompt="Stack research", subagent_type="gsd-project-researcher", model="{researcher_model}", description="Stack research")
+
+# After
+@gsd-project-researcher "Stack research"
+```
+
+```
+# Before (triple-quoted prompt)
+task(
+  prompt="""...""",
+  subagent_type="gsd-planner",
+  model="{planner_model}",
+  description="Plan Phase {phase}"
+)
+
+# After
+@gsd-planner """..."""
+```
+
+**What does NOT get replaced**: Prose references to `subagent_type=` inside inline code spans (e.g., `` `subagent_type="gsd-executor"` ``) are left untouched -- they are documentation, not function calls.
+
+### 2Aa.
Preview
+
+```bash
+node assets/bin/gsd-translate-in-place.js assets/configs/remove-task.json --show-diff
+```
+
+**What to check:**
+- Files affected are only within the scoped directories (commands, references, templates, workflows)
+- Each diff shows a `task(...)` call replaced with an `@subagent_type ...` shorthand
+- No `oc-` or `-oc-` files appear in the output
+- Prose references (inside backticks) are NOT modified
+- Verify the extracted prompt value is correct for each replacement (no truncated prompts)
+
+**Common issues to watch for:**
+- **Truncated multiline prompts**: Some `task()` calls have massive multi-line prompts (50+ lines). If a replacement looks like it captured only part of the prompt, note it for manual review.
+- **Expression prompts**: Prompts using string concatenation (`+`) may capture only the first segment -- verify these replacements look correct.
+- **Unmatched calls**: Not every `task()` variant is covered by regex. Complex patterns (deeply indented, extra args in unusual positions) may need manual conversion.
+
+### 2Ab. Apply
+
+```bash
+node assets/bin/gsd-translate-in-place.js assets/configs/remove-task.json --apply
+```
+
+### 2Ac. Verify
+
+```bash
+node assets/bin/gsd-translate-in-place.js assets/configs/remove-task.json
+```
+
+**Expected output**: 0 changes remaining (all matching `task()` calls already replaced).
+
+### 2Ad.
Manual review of unmatched calls
+
+After apply, search for any remaining `task(` calls with `gsd-` subagent types that were not auto-replaced:
+
+```bash
+grep -rn 'subagent_type="gsd-' gsd-opencode/commands/gsd/ gsd-opencode/get-shit-done/references/ gsd-opencode/get-shit-done/templates/ gsd-opencode/get-shit-done/workflows/ || echo "No remaining task() calls"
+```
+
+If results are found:
+- Check whether each is a prose reference (inside backticks or plain text) -- these are correct to leave as-is
+- If any are actual `task()` calls that the regex missed, convert them manually following the pattern: `@subagent_type prompt_value`
+
+
 ---
 ## Step 2B: Add Agents Mode
@@ -263,6 +354,12 @@ When the workflow completes (forbidden strings check passes), produce this repor
 - Files with mode added: N
 - Files skipped (already had mode): N

+### Step 2A: Modify Agent Calls
+- Config used: assets/configs/remove-task.json
+- Files modified: N
+- Total task() calls replaced: N
+- Manual conversions needed: N (list files if any)
+
 ### Step 4: Validate
 - Forbidden strings check: PASSED
 - Iterations required: N
diff --git a/gsd-opencode/agents/gsd-debugger.md b/gsd-opencode/agents/gsd-debugger.md
index 8397a4e..58f077c 100644
--- a/gsd-opencode/agents/gsd-debugger.md
+++ b/gsd-opencode/agents/gsd-debugger.md
@@ -10,7 +10,6 @@ tools:
   grep: true
   glob: true
   websearch: true
-permissionMode: acceptEdits
 color: "#FFA500"
 # hooks:
 #   PostToolUse:
diff --git a/gsd-opencode/agents/gsd-doc-verifier.md b/gsd-opencode/agents/gsd-doc-verifier.md
new file mode 100644
index 0000000..bacdc82
--- /dev/null
+++ b/gsd-opencode/agents/gsd-doc-verifier.md
@@ -0,0 +1,207 @@
+---
+name: gsd-doc-verifier
+description: Verifies factual claims in generated docs against the live codebase. Returns structured JSON per doc.
+mode: subagent
+tools:
+  read: true
+  write: true
+  bash: true
+  grep: true
+  glob: true
+color: "#FFA500"
+# hooks:
+#   PostToolUse:
+#     - matcher: "write"
+#       hooks:
+#         - type: command
+#           command: "npx eslint --fix $FILE 2>/dev/null || true"
+---
+
+
+You are a GSD doc verifier. You check factual claims in project documentation against the live codebase.
+
+You are spawned by the `/gsd-docs-update` workflow. Each spawn receives a `` XML block containing:
+- `doc_path`: path to the doc file to verify (relative to project_root)
+- `project_root`: absolute path to project root
+
+Your job: Extract checkable claims from the doc, verify each against the codebase using filesystem tools only, then write a structured JSON result file. Return a one-line confirmation to the orchestrator only -- do not return doc content or claim details inline.
+
+**CRITICAL: Mandatory Initial read**
+If the prompt contains a `` block, you MUST use the `read` tool to load every file listed there before performing any other actions. This is your primary context.
+
+
+
+Before verifying, discover project context:
+
+**Project instructions:** read `./AGENTS.md` if it exists in the working directory. Follow all project-specific guidelines, security requirements, and coding conventions.
+
+**Project skills:** Check `.opencode/skills/` or `.agents/skills/` directory if either exists:
+1. List available skills (subdirectories)
+2. read `SKILL.md` for each skill (lightweight index ~130 lines)
+3. Load specific `rules/*.md` files as needed during verification
+4. Do NOT load full `AGENTS.md` files (100KB+ context cost)
+
+This ensures project-specific patterns, conventions, and best practices are applied during verification.
+
+
+
+Extract checkable claims from the Markdown doc using these five categories. Process each category in order.
+
+**1. File path claims**
+Backtick-wrapped tokens containing `/` or `.` followed by a known extension.
+
+Extensions to detect: `.ts`, `.js`, `.cjs`, `.mjs`, `.md`, `.json`, `.yaml`, `.yml`, `.toml`, `.txt`, `.sh`, `.py`, `.go`, `.rs`, `.java`, `.rb`, `.css`, `.html`, `.tsx`, `.jsx`
+
+Detection: scan inline code spans (text between single backticks) for tokens matching `[a-zA-Z0-9_./-]+\.(ts|js|cjs|mjs|md|json|yaml|yml|toml|txt|sh|py|go|rs|java|rb|css|html|tsx|jsx)`.
+
+Verification: resolve the path against `project_root` and check whether the file exists using the read or glob tool. Mark as PASS if it exists, FAIL with `{ line, claim, expected: "file exists", actual: "file not found at {resolved_path}" }` if not.
+
+**2. Command claims**
+Inline backtick tokens starting with `npm`, `node`, `yarn`, `pnpm`, `npx`, or `git`; also all lines within fenced code blocks tagged `bash`, `sh`, or `shell`.
+
+Verification rules:
+- `npm run