
docs(plan-reviews): add Execution-settings recommendation preamble#1229

Open
shanbhagp wants to merge 1 commit into garrytan:main from shanbhagp:add-execution-settings-preamble

Conversation

@shanbhagp

What

Adds a short "Execution settings recommendation" blockquote at the top of `/plan-ceo-review` and `/plan-eng-review` so users pick the right compute tier before invoking the skill.

> **Execution settings recommendation**
> Bounded strategic review: assess a plan against business / product /
> investor framing. Second-tier compute (e.g., Opus Extra high) is
> right. Top-tier compute is warranted only when the plan involves
> material strategic ambiguity that benefits from open-ended exploration.

Why

Claude Code now exposes meaningfully different compute tiers (Opus Max, Opus Extra high, Sonnet, etc.). Some skills genuinely benefit from the top tier; many are bounded enough that mid-tier handles them at a fraction of the cost. Without per-skill guidance, users either over-spend on routine reviews or under-spend on review tasks that need headroom.

The wording is task-shape framing ("bounded strategic review", "second-tier compute (e.g., Opus Extra high)") rather than pinned model names, so the guidance survives model-version churn (Opus 4.7 → 4.8 → 5, etc.) without doc maintenance.

The pattern was developed across a set of project-local skills (six narrative-deck skills with similarly-shaped recommendations); the eng + CEO plan reviews were the natural upstream candidates — stable, non-domain-specific, and used widely.

Files

  • `plan-ceo-review/SKILL.md.tmpl` — preamble after the H1, before `## Philosophy`
  • `plan-eng-review/SKILL.md.tmpl` — preamble after the H1, before "Review this plan thoroughly..."
  • `plan-ceo-review/SKILL.md` and `plan-eng-review/SKILL.md` — regenerated via `bun run gen:skill-docs` (all three hosts: Claude, Codex, Factory)

Diff is purely additive: 4 files, +26 lines.

Reviewer notes

  • Skill-docs freshness gate passes locally — `bun run gen:skill-docs && git diff --exit-code` is clean.
  • Templates and rendered docs are committed together so the `skill-docs.yml` workflow lands green on first build.
  • No `version-gate.yml` trigger — `VERSION`, `CHANGELOG.md`, and `package.json` are untouched. Happy to bump if your release cadence wants this in a versioned drop.
  • Placement matches your existing template structure (the blockquote sits between the H1 and the first body section, not embedded in the `{{PREAMBLE}}` macro space).

Aside (separate issue, not this PR)

While regenerating on Windows I noticed the repo has no `.gitattributes`. On Windows clones with `core.autocrlf=true` (the default), `bun run gen:skill-docs` produces files with CRLF line endings, and `transformFrontmatter()`'s strip regex `^${field}:\s*.*\n` fails to match them — `sensitive: true` leaks through into the Claude `SKILL.md` outputs. I worked around it locally by setting `core.autocrlf=false`. Happy to file a separate issue or send a tiny fix PR (a one-line `.gitattributes` would close it).
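For reference, a minimal sketch of the failure mode and a CRLF-tolerant variant — `stripField` is a hypothetical stand-in for the stripping step inside `transformFrontmatter()`, not the repo's actual code:

```javascript
// Hypothetical sketch -- stripField is an illustrative stand-in for the
// field-stripping step inside transformFrontmatter(), not the repo's code.
// In JS regexes, `.` does not match `\r`, so `^${field}:\s*.*\n` can never
// match a CRLF-terminated line; allowing an optional `\r` fixes it.
function stripField(frontmatter, field) {
  const pattern = new RegExp(`^${field}:\\s*.*\\r?\\n`, "m");
  return frontmatter.replace(pattern, "");
}

// LF input: stripped as before.
console.log(JSON.stringify(stripField("sensitive: true\nname: x\n", "sensitive")));
// CRLF input: only the \r?-tolerant pattern strips it.
console.log(JSON.stringify(stripField("sensitive: true\r\nname: x\r\n", "sensitive")));
```

The one-line `.gitattributes` mentioned above would likely be something like `*.md text eol=lf` (the exact pattern is a suggestion, not repo policy).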

…w and plan-eng-review

Adds a brief "Execution settings recommendation" blockquote at the top
of each skill's prompt body so users pick the right compute tier for
the task before invoking the skill.

The wording is intentionally task-shape framing ("bounded strategic
review", "second-tier compute (e.g., Opus Extra high)") rather than
pinned model names, so the guidance survives model-version changes.

Both templates and the regenerated rendered docs are included; CI's
skill-docs freshness check passes locally.
@shanbhagp
Author

One observation that came up while writing this PR — surfacing it here as an offer rather than expanding scope:

The same shape of guidance probably belongs on most of the bounded review skills (`slide-review`, `design-review`, `plan-design-review`, `qa-only`, `review`, `cso`, `landing-report`). The wording would differ per skill, but the slot is the same: "this is a bounded analysis, second-tier compute is right; top-tier only when X."

I noticed you already have `preamble-tier:` in the frontmatter for tiered preamble macros. A natural extension would be a `compute-tier-recommendation:` field that renders uniformly across skills — same content slot, machine-checkable, no per-PR copy-paste drift. But I genuinely don't know your mental model here:

  • If you'd prefer that direction, happy to send a follow-up PR proposing the field + renderer changes + rolling it out to the bounded review skills.
  • If you'd prefer per-skill human curation (which has its own merits — tighter wording, more judgment per skill), I can expand this PR to cover the rest of the bounded reviewers in the same blockquote style.
  • Or merge this as-is and we leave the broader rollout for later — also fine.

No need to decide now; flagging in case it's useful context for review.
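To make the offer concrete, a rough sketch of what the hypothetical `compute-tier-recommendation:` field might look like in a skill's frontmatter — the field name, the `standard` tier value, and the rendering behavior are all assumptions for discussion, not existing repo conventions:

```yaml
# Hypothetical frontmatter -- compute-tier-recommendation is a proposed
# field, not an existing one; preamble-tier is the existing tiered macro
# (the "standard" value here is a placeholder).
name: plan-ceo-review
preamble-tier: standard
compute-tier-recommendation: |
  Bounded strategic review: second-tier compute (e.g., Opus Extra high)
  is right. Top-tier only when the plan involves material strategic
  ambiguity that benefits from open-ended exploration.
```

A renderer change would then emit the same blockquote shape across all skills from this one slot, keeping the wording drift-free and machine-checkable.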
