diff --git a/README.md b/README.md
index 0975a67..415b7f5 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,13 @@
# token-optimization
A customer-facing engagement for context management and token optimization
-This repository contains source material for a practical developer workshop on context management, token optimization, agent customization, tool/MCP hygiene, model choice, AI evals, usage visibility, and sustainable team practices.
+This repository contains source material for a practical developer workshop on token optimization, context engineering, agent customization, tool/MCP hygiene, model choice, AI evals, usage visibility, and sustainable team practices.
-Start with [`labs/README.md`](labs/README.md) for 1-hour, 2-hour, and 4-hour delivery outlines, then use the chapter files in order or as modular source material.
+Start with [`labs/README.md`](labs/README.md) for 1-hour, 2-hour, and 4-hour delivery outlines, then use the track labs, deck outline, surface matrix, templates, exercises, and worksheets as modular source material.
## Sample web app
-This repository also includes a static JavaScript sample app for GitHub Copilot Usage-Based Budgeting.
+This repository also includes a static JavaScript sample app for GitHub Copilot Usage-Based Budgeting. It includes budget-scope guidance and a per-surface estimator that uses user-supplied token rates instead of hardcoded future pricing.
To run it locally:
@@ -57,16 +57,50 @@ The repo uses small project skills for workflows that should only load when rele
- Give attendees repeatable habits they can apply in other projects.
- Introduce eval-driven improvement so teams can measure whether changes help.
+## Curriculum tracks
+
+| Track | Labs |
+| --- | --- |
+| VS Code/IDE users | [`00`](labs/00-foundations.md), [`01`](labs/01-ide-context-and-prompt-flow.md), [`02`](labs/02-ide-instructions-tools-and-mcp.md), [`07`](labs/07-measurement-billing-and-governance.md), [`08`](labs/08-applied-repo-review-and-adoption.md) |
+| GitHub Copilot CLI users | [`00`](labs/00-foundations.md), [`03`](labs/03-cli-context-and-tool-output.md), [`04`](labs/04-cli-agents-tools-and-cost-control.md), [`07`](labs/07-measurement-billing-and-governance.md), [`08`](labs/08-applied-repo-review-and-adoption.md) |
+| GitHub.com/code review users | [`00`](labs/00-foundations.md), [`05`](labs/05-github-web-context-and-coding-agent.md), [`06`](labs/06-github-code-review-and-pr-hygiene.md), [`07`](labs/07-measurement-billing-and-governance.md), [`08`](labs/08-applied-repo-review-and-adoption.md) |
+| Full cross-surface practitioner | [`00`](labs/00-foundations.md) through [`08`](labs/08-applied-repo-review-and-adoption.md) |
+
## Contents
-- [`labs/README.md`](labs/README.md) — overview, prerequisites, and timed agendas
-- [`labs/01-context-management-basics.md`](labs/01-context-management-basics.md)
-- [`labs/02-instructions-and-agent-customizations.md`](labs/02-instructions-and-agent-customizations.md)
-- [`labs/03-mcp-and-tool-optimization.md`](labs/03-mcp-and-tool-optimization.md)
-- [`labs/04-chat-session-management.md`](labs/04-chat-session-management.md)
-- [`labs/05-model-choice.md`](labs/05-model-choice.md)
-- [`labs/06-chat-history-and-memory.md`](labs/06-chat-history-and-memory.md)
-- [`labs/07-usage-and-billing-visibility.md`](labs/07-usage-and-billing-visibility.md)
-- [`labs/08-ai-evals-and-observability.md`](labs/08-ai-evals-and-observability.md)
-- [`labs/09-ideal-workshop-repo.md`](labs/09-ideal-workshop-repo.md)
-- [`labs/10-next-steps-and-extra-topics.md`](labs/10-next-steps-and-extra-topics.md)
+- [`labs/README.md`](labs/README.md) - overview, prerequisites, and timed agendas
+- [`decks/token-optimization-context-engineering.pptx`](decks/token-optimization-context-engineering.pptx) - primary workshop delivery deck with embedded speaker notes
+- [`decks/token-optimization-context-engineering.executive.pptx`](decks/token-optimization-context-engineering.executive.pptx) - executive briefing visual variant
+- [`decks/token-optimization-context-engineering.technical.pptx`](decks/token-optimization-context-engineering.technical.pptx) - technical deep dive visual variant
+- [`decks/token-optimization-context-engineering.outline.md`](decks/token-optimization-context-engineering.outline.md) - editable delivery deck outline
+- [`decks/token-optimization-context-engineering.speaker-notes.md`](decks/token-optimization-context-engineering.speaker-notes.md) - speaker notes for the delivery deck
+- [`tools/generate_context_deck.py`](tools/generate_context_deck.py) - regenerates the styled PPTX variants from the Markdown sources
+- [`resources/copilot-surface-matrix.md`](resources/copilot-surface-matrix.md) - living reference for Copilot surfaces and context controls
+- [`resources/context-inventory-worksheet.md`](resources/context-inventory-worksheet.md)
+- [`resources/instruction-diet-worksheet.md`](resources/instruction-diet-worksheet.md)
+- [`resources/customer-preflight-checklist.md`](resources/customer-preflight-checklist.md)
+- [`resources/monday-morning-checklist.md`](resources/monday-morning-checklist.md)
+- [`templates/README.md`](templates/README.md) - copy/paste starter Copilot customization files
+- [`exercises/README.md`](exercises/README.md) - track-specific hands-on exercises
+- [`facilitator/delivery-guide.md`](facilitator/delivery-guide.md)
+- [`labs/00-foundations.md`](labs/00-foundations.md)
+- [`labs/01-ide-context-and-prompt-flow.md`](labs/01-ide-context-and-prompt-flow.md)
+- [`labs/02-ide-instructions-tools-and-mcp.md`](labs/02-ide-instructions-tools-and-mcp.md)
+- [`labs/03-cli-context-and-tool-output.md`](labs/03-cli-context-and-tool-output.md)
+- [`labs/04-cli-agents-tools-and-cost-control.md`](labs/04-cli-agents-tools-and-cost-control.md)
+- [`labs/05-github-web-context-and-coding-agent.md`](labs/05-github-web-context-and-coding-agent.md)
+- [`labs/06-github-code-review-and-pr-hygiene.md`](labs/06-github-code-review-and-pr-hygiene.md)
+- [`labs/07-measurement-billing-and-governance.md`](labs/07-measurement-billing-and-governance.md)
+- [`labs/08-applied-repo-review-and-adoption.md`](labs/08-applied-repo-review-and-adoption.md)
+- [`labs/MIGRATION.md`](labs/MIGRATION.md)
+
+## Regenerating the deck
+
+The PPTX files are generated from the Markdown outline and speaker notes so content remains easy to review.
+
+```powershell
+python -m pip install -r requirements-dev.txt
+python tools\generate_context_deck.py
+```
+
+The generator produces workshop, executive briefing, and technical deep dive variants. All generated PPTX files include embedded PowerPoint speaker notes from `decks/token-optimization-context-engineering.speaker-notes.md`.
diff --git a/decks/Token Optimization.pptx b/decks/Token Optimization.pptx
new file mode 100644
index 0000000..f50e64c
Binary files /dev/null and b/decks/Token Optimization.pptx differ
diff --git a/decks/token-optimization-context-engineering.executive.pptx b/decks/token-optimization-context-engineering.executive.pptx
new file mode 100644
index 0000000..4909862
Binary files /dev/null and b/decks/token-optimization-context-engineering.executive.pptx differ
diff --git a/decks/token-optimization-context-engineering.outline.md b/decks/token-optimization-context-engineering.outline.md
new file mode 100644
index 0000000..aab62c4
--- /dev/null
+++ b/decks/token-optimization-context-engineering.outline.md
@@ -0,0 +1,172 @@
+# Token Optimization and Context Engineering across GitHub Copilot
+
+Source outline for the delivery deck. Keep this file easy to diff; regenerate the `.pptx` delivery artifacts after major edits with `python tools\generate_context_deck.py`.
+
+## Slide 1: Title
+
+- Token Optimization and Context Engineering
+- Getting more value from every Copilot interaction
+- Across Copilot CLI, VS Code, GitHub.com, coding agent, and code review
+
+## Slide 2: Why this matters now
+
+- Usage models are shifting from simple request counting toward more granular accounting.
+- Long conversations, broad tool output, and unnecessary context can affect cost, latency, and quality.
+- The durable habit is not "use less Copilot"; it is "send better context."
+
+## Slide 3: Context engineering
+
+- Context engineering means bringing the right information, in the right format, to the model.
+- It turns ad hoc prompting into repeatable workflows.
+- Core primitives: custom instructions, prompt files, skills, agents, retrieval, and human review gates.
+
+## Slide 4: What counts as context
+
+- Product and system instructions
+- Repository, organization, path-specific, personal, and agent guidance
+- Conversation history and summaries
+- Files, selections, issues, pull requests, and tool results
+- Retrieved docs, web pages, MCP output, and generated plans
+
+## Slide 5: Important billing nuance
+
+- "Context is resent" is a useful mental model, not a universal invoice formula.
+- Some products and models use caching or product-specific accounting.
+- The practical takeaway remains: stale or irrelevant context increases cost, latency, and confusion.
+
+## Slide 6: Where context gets wasted
+
+- Whole directories or large files when one function matters
+- Long-running mixed-topic sessions
+- Overgrown custom instructions
+- Raw logs, build output, generated files, and tool noise
+- High-cost models used for routine work
+- Auto-review or agent tasks with vague scope
+
+## Slide 7: The five levers
+
+- Context hygiene
+- Prompt discipline
+- Model and surface selection
+- Scope and tool control
+- Measurement
+
+## Slide 8: Lever 1 - Context hygiene
+
+- Start fresh when the task changes.
+- Summarize before switching focus.
+- Keep stable source-of-truth docs short and current.
+- Avoid re-discovery loops by preserving useful handoffs.
+
+## Slide 9: Lever 2 - Prompt discipline
+
+- Use Markdown structure.
+- State outcome, scope, constraints, and success criteria.
+- Reference specific files, issues, PRs, or selections.
+- Add validation gates before risky edits.
+
+## Slide 10: Lever 3 - Model and surface selection
+
+- Choose the cheapest model or surface that can reliably finish the job.
+- Use VS Code Plan for design, Agent for implementation, Ask for exploration.
+- Use CLI when you need visible context/tool control.
+- Use code review when the unit of work is a pull request.
+
+## Slide 11: Lever 4 - Scope and tool control
+
+- Keep workspace, repository, and tool scope tight.
+- Use targeted instructions instead of one giant instruction file.
+- Enable MCP tools only when the task needs them.
+- Use human-in-the-loop approval for high-risk tool actions.
+
+## Slide 12: Lever 5 - Measurement
+
+- Measure tokens where exposed.
+- Measure premium requests, review counts, and billing views where tokens are hidden.
+- Measure quality with retries, false positives, PR churn, and human rework.
+- Optimize after you have a baseline.
+
+## Slide 13: Surface matrix
+
+- CLI: most visible token/context controls.
+- VS Code: daily coding workflow with Ask, Plan, Agent, custom instructions, prompt files, and review.
+- GitHub.com: repo, issue, PR, and discussion context.
+- Coding/cloud agent: asynchronous implementation from scoped tasks.
+- Code review: PR-focused feedback with product-specific constraints.
+
+## Slide 14: VS Code pattern
+
+- Curate project context with concise instructions and docs.
+- Plan first for complex work.
+- Implement from the plan in a fresh or focused session.
+- Review changes against the plan.
+
+## Slide 15: GitHub.com web pattern
+
+- Ask from the page that already has the relevant context.
+- Keep threads focused.
+- Use repository, issue, and PR context deliberately.
+- Treat generated files as drafts.
+
+## Slide 16: Copilot CLI pattern
+
+- Use sessions like branches: one task, one focused context.
+- Filter tool output before it enters the conversation.
+- Delegate noisy discovery when available.
+- Use usage/context visibility to teach the token mental model.
+
+## Slide 17: Coding agent pattern
+
+- Write issues like implementation briefs.
+- Include acceptance criteria, validation commands, and files to avoid.
+- Keep tasks small enough to review.
+- Review the generated PR like any other teammate's work.
+
+## Slide 18: Code review pattern
+
+- Keep PRs small.
+- Tune repo and path-specific review instructions.
+- Watch automatic review policy and quota implications.
+- Validate Copilot findings; review comments are not approvals.
+
+## Slide 19: Context inventory exercise
+
+- List every context source.
+- Mark required, useful, stale, redundant, sensitive, or unknown.
+- Decide what stays, what moves, what gets summarized, and what gets removed.
+
+## Slide 20: Instruction diet exercise
+
+- Keep stable rules always-on.
+- Move targeted rules to path-specific instructions.
+- Move repeated workflows to prompts, skills, or agents.
+- Link long docs instead of copying them into instructions.
+
+## Slide 21: Governance
+
+- Content exclusion
+- Model access policies
+- Code review settings
+- Budgets and alerts
+- Telemetry and dashboard ownership
+
+## Slide 22: Delivery tracks
+
+- 1 hour: mental model, surface view, demo, checklist
+- 2 hours: practitioner lab and prompt/context refactor
+- 4 hours: customer environment review and team operating model
+
+## Slide 23: What improvement looks like
+
+- Fewer irrelevant tokens
+- Faster answers
+- Fewer retries
+- Higher-signal reviews
+- Clearer ownership of policy and measurement
+
+## Slide 24: Takeaways
+
+- Context is a design input, not a dumping ground.
+- Token optimization and context engineering improve both cost and quality.
+- Surface controls differ; the habits transfer.
+- Start with three changes this week and measure.
diff --git a/decks/token-optimization-context-engineering.pptx b/decks/token-optimization-context-engineering.pptx
new file mode 100644
index 0000000..2fc2545
Binary files /dev/null and b/decks/token-optimization-context-engineering.pptx differ
diff --git a/decks/token-optimization-context-engineering.speaker-notes.md b/decks/token-optimization-context-engineering.speaker-notes.md
new file mode 100644
index 0000000..c815230
--- /dev/null
+++ b/decks/token-optimization-context-engineering.speaker-notes.md
@@ -0,0 +1,99 @@
+# Speaker Notes: Token Optimization and Context Engineering
+
+These notes are embedded into the generated PPTX variants by `python tools\generate_context_deck.py`.
+
+## Slide 1
+
+Open by saying this is not a "use less Copilot" session. It is a "get better answers with less waste" session.
+
+## Slide 2
+
+Anchor on the current shift: customers need practical habits before usage surprises become trust issues.
+
+## Slide 3
+
+Define context engineering as an evolution of prompt engineering. The goal is not clever phrasing; the goal is the right information in the right format.
+
+## Slide 4
+
+Walk left to right through the context inputs. Point out that the user prompt is often the smallest part.
+
+## Slide 5
+
+Use careful language. Product accounting can vary and caching may apply. The workshop teaches habits that remain useful even as billing details change.
+
+## Slide 6
+
+Emphasize that context waste hurts quality before it hurts the bill. Too much irrelevant context makes the model less focused.
+
+## Slide 7
+
+This is the map for the workshop. Do not create a second framework; map context engineering into these five levers.
+
+## Slide 8
+
+Use the branch analogy: one focused session per task. Clearing context is good between tasks, but clearing and re-reading the same files can be wasteful.
+
+## Slide 9
+
+Show one overloaded request and one structured request. Keep the demo short.
+
+## Slide 10
+
+Make model and surface selection practical. The right answer may be a different surface, not just a different model.
+
+## Slide 11
+
+Explain that every enabled tool is a capability and a possible source of noise. Tool descriptions and access boundaries are product UX.
+
+## Slide 12
+
+For surfaces without token visibility, measure proxies: retries, PR churn, review false positives, usage dashboards, and time-to-merge.
+
+## Slide 13
+
+Warn that the matrix is not parity. Some cells are intentionally "not available."
+
+## Slide 14
+
+VS Code is the daily practitioner path: Ask to understand, Plan to structure, Agent to implement, Review to validate.
+
+## Slide 15
+
+GitHub.com is strongest when the page context matters: repository, issue, pull request, or discussion.
+
+## Slide 16
+
+CLI remains the best live demo for token concepts because context and usage are visible.
+
+## Slide 17
+
+Coding agent tasks should look like good implementation tickets. Vague tasks create broad exploration and broad diffs.
+
+## Slide 18
+
+Code review is purpose-built. Do not promise model switching. Teach PR hygiene and instruction hygiene.
+
+## Slide 19
+
+Students should discover that some of their "helpful" context is stale, sensitive, or redundant.
+
+## Slide 20
+
+This is the most actionable demo. Trim one bloated instruction file and show where the pieces should move.
+
+## Slide 21
+
+Separate developer habits from admin controls. Admin controls compound because they apply every turn for every user.
+
+## Slide 22
+
+Match the track to the audience. Do not run the 4-hour customer review without preflight.
+
+## Slide 23
+
+Avoid unsupported savings promises. Talk about directional improvements and measurement.
+
+## Slide 24
+
+Close with a small commitment: pick three habits, measure for a week, then expand.
diff --git a/decks/token-optimization-context-engineering.technical.pptx b/decks/token-optimization-context-engineering.technical.pptx
new file mode 100644
index 0000000..82e4eed
Binary files /dev/null and b/decks/token-optimization-context-engineering.technical.pptx differ
diff --git a/exercises/01-vscode-context-attachments/README.md b/exercises/01-vscode-context-attachments/README.md
new file mode 100644
index 0000000..c24a20c
--- /dev/null
+++ b/exercises/01-vscode-context-attachments/README.md
@@ -0,0 +1,26 @@
+# Exercise 01: VS Code Context Attachments
+
+## Goal
+
+Practice replacing broad IDE context with deliberate file, selection, and test context.
+
+## Task
+
+Investigate a login bug in a sample project.
+
+## Baseline
+
+Read [`naive-transcript.md`](naive-transcript.md) and identify context that is stale, redundant, or too broad.
+
+## Engineered flow
+
+Read [`engineered-transcript.md`](engineered-transcript.md) and compare:
+
+- Which files are attached?
+- Which context is omitted?
+- Which mode is used first?
+- What validation gate is added?
+
+## Output
+
+Write one prompt for Ask mode, one for Plan mode, and one for Agent mode.
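+
+A sketch of what the three prompts could look like (file and test names are hypothetical, carried over from the engineered transcript):
+
+```markdown
+Ask: In `src/auth/session.ts`, explain how session expiry is checked. Use only this file and the selected test.
+
+Plan: Produce a step-by-step fix plan for the expired-session bug, listing files to touch and the validation command to run.
+
+Agent: Implement the approved plan. Limit edits to `src/auth/` and the failing test. Stop and report before changing any other file.
+```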
diff --git a/exercises/01-vscode-context-attachments/engineered-transcript.md b/exercises/01-vscode-context-attachments/engineered-transcript.md
new file mode 100644
index 0000000..32967a9
--- /dev/null
+++ b/exercises/01-vscode-context-attachments/engineered-transcript.md
@@ -0,0 +1,12 @@
+# Engineered transcript
+
+User:
+
+> In `src/auth/session.ts`, explain why expired sessions are not rejected. Use the selected failing test and the stack trace below. Do not inspect unrelated directories. Return a diagnosis and patch plan before editing.
+
+Likely result:
+
+- Specific file and test context.
+- Clear scope boundary.
+- Plan before implementation.
+- Validation is known before edits begin.
diff --git a/exercises/01-vscode-context-attachments/naive-transcript.md b/exercises/01-vscode-context-attachments/naive-transcript.md
new file mode 100644
index 0000000..9561040
--- /dev/null
+++ b/exercises/01-vscode-context-attachments/naive-transcript.md
@@ -0,0 +1,12 @@
+# Naive transcript
+
+User:
+
+> Fix the login bug in this project. Use the whole repo if needed. Also check if anything else looks wrong.
+
+Likely result:
+
+- Broad codebase search.
+- Unrelated files enter context.
+- The assistant may mix discovery, implementation, and review.
+- Success criteria are unclear.
diff --git a/exercises/02-vscode-instructions-stack/README.md b/exercises/02-vscode-instructions-stack/README.md
new file mode 100644
index 0000000..79d3616
--- /dev/null
+++ b/exercises/02-vscode-instructions-stack/README.md
@@ -0,0 +1,21 @@
+# Exercise 02: VS Code Instructions Stack
+
+## Goal
+
+Split a bloated instruction file into targeted Copilot primitives.
+
+## Starting point
+
+Imagine a single always-on instruction file that contains coding style, frontend accessibility, test conventions, release steps, docs style, and MCP guidance.
+
+## Steps
+
+1. Keep only stable repo-wide rules in `.github/copilot-instructions.md`.
+2. Move frontend guidance to `.github/instructions/frontend.instructions.md`.
+3. Move test guidance to `.github/instructions/tests.instructions.md`.
+4. Move repeated workflows to `.github/prompts/*.prompt.md`.
+5. Move tool boundaries to a chat mode or MCP README.
+
+## Output
+
+Use the files in [`../../templates`](../../templates/README.md) to create a minimal instruction stack.
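+
+A minimal path-specific instruction file for step 2 might look like this (the `applyTo` glob is an assumption; match it to your repo's frontend paths):
+
+```markdown
+---
+applyTo: "src/frontend/**"
+---
+
+Use semantic HTML and ARIA roles.
+All interactive elements must be keyboard accessible.
+Prefer existing design-system components over custom CSS.
+```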
diff --git a/exercises/03-cli-session-scope/README.md b/exercises/03-cli-session-scope/README.md
new file mode 100644
index 0000000..767d610
--- /dev/null
+++ b/exercises/03-cli-session-scope/README.md
@@ -0,0 +1,27 @@
+# Exercise 03: CLI Session Scope
+
+## Goal
+
+Turn a noisy CLI troubleshooting session into a focused handoff.
+
+## Starting point
+
+Use a transcript or notes from a session that mixed discovery, failed commands, implementation, and review.
+
+## Steps
+
+1. Mark which findings are still true.
+2. Remove failed guesses and unrelated command output.
+3. Write a five-line handoff summary.
+4. Start the next request with the handoff instead of the full transcript.
+5. Add the next validation command.
+
+## Output
+
+```markdown
+Task:
+Current state:
+Important files:
+Decisions already made:
+What I need next:
+```
diff --git a/exercises/04-cli-agent-tool-control/README.md b/exercises/04-cli-agent-tool-control/README.md
new file mode 100644
index 0000000..dbd013c
--- /dev/null
+++ b/exercises/04-cli-agent-tool-control/README.md
@@ -0,0 +1,21 @@
+# Exercise 04: CLI Agent and Tool Control
+
+## Goal
+
+Decide what should stay in the main CLI session and what should be delegated.
+
+## Scenario
+
+You need to update a feature but do not know the exact files. The repo has tests, docs, generated files, and deployment config.
+
+## Steps
+
+1. Define the task in one sentence.
+2. List tools that are needed and tools that are not needed.
+3. Delegate read-only discovery with explicit files or folders to avoid.
+4. Ask for a concise summary instead of raw search output.
+5. Decide whether implementation should be direct or delegated.
+
+## Output
+
+A bounded agent prompt with scope, constraints, validation, and approval gates.
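+
+One possible shape for that bounded prompt (paths and commands are illustrative):
+
+```markdown
+Task: Find where the export feature formats dates.
+Scope: Read-only. Search `src/` only; skip `dist/`, `docs/`, and deployment config.
+Output: A five-line summary naming the relevant files and functions.
+Approval: Confirm the summary with me before any implementation starts.
+```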
diff --git a/exercises/05-github-coding-agent-scope/README.md b/exercises/05-github-coding-agent-scope/README.md
new file mode 100644
index 0000000..63c4a62
--- /dev/null
+++ b/exercises/05-github-coding-agent-scope/README.md
@@ -0,0 +1,21 @@
+# Exercise 05: GitHub Coding Agent Scope
+
+## Goal
+
+Convert a vague issue into a coding-agent-ready implementation brief.
+
+## Scenario
+
+Use a sandbox repository or customer-approved demo repository. TODO(cody): confirm the standard demo org and fallback repository for live delivery.
+
+## Steps
+
+1. Start with a vague issue title and description.
+2. Add the desired behavior.
+3. Add non-goals and files to avoid.
+4. Add acceptance criteria.
+5. Add validation commands and review expectations.
+
+## Output
+
+An issue body that a reviewer could use to judge whether the generated PR is in scope.
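+
+A sketch of the finished issue body (section names and details are suggestions, not product requirements):
+
+```markdown
+## Desired behavior
+Exports include the user's time zone in the filename.
+
+## Non-goals
+Do not change the export file format. Do not touch `src/billing/`.
+
+## Acceptance criteria
+- Filenames include the time zone and date
+- Existing export tests still pass
+
+## Validation
+Run the export test suite and attach the output to the PR.
+```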
diff --git a/exercises/06-github-code-review-hygiene/README.md b/exercises/06-github-code-review-hygiene/README.md
new file mode 100644
index 0000000..59dccab
--- /dev/null
+++ b/exercises/06-github-code-review-hygiene/README.md
@@ -0,0 +1,17 @@
+# Exercise 06: GitHub Code Review Hygiene
+
+## Goal
+
+Improve Copilot review signal by improving PR shape and review instructions.
+
+## Steps
+
+1. Start with a PR that mixes unrelated changes.
+2. Split the change into one reviewable goal.
+3. Rewrite the PR description with problem, approach, risk, and validation.
+4. Draft a path-specific review instruction.
+5. Decide which comments require human review before merge.
+
+## Output
+
+A review-ready PR description and a short review-instruction snippet.
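+
+A compact version of the two outputs (wording is illustrative):
+
+```markdown
+## PR description
+Problem: Expired sessions were accepted on refresh.
+Approach: Reject tokens past their expiry in `session.ts`; add a regression test.
+Risk: Low; auth paths only.
+Validation: Auth test suite passes locally.
+
+## Review instruction (path-specific)
+For files under `src/auth/`, flag any change that loosens token validation
+and require an explicit justification in the PR description.
+```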
diff --git a/exercises/07-spaces-vs-adhoc-prompts/README.md b/exercises/07-spaces-vs-adhoc-prompts/README.md
new file mode 100644
index 0000000..f4b4749
--- /dev/null
+++ b/exercises/07-spaces-vs-adhoc-prompts/README.md
@@ -0,0 +1,21 @@
+# Exercise 07: Spaces vs Ad Hoc Prompts
+
+## Goal
+
+Compare curated context with ad hoc pasted context.
+
+## Scenario
+
+You need to answer a cross-repository architecture question.
+
+## Steps
+
+1. List the repositories and docs that are truly needed.
+2. Decide which sources belong in a Space.
+3. Write one ad hoc prompt that pastes too much context.
+4. Write one Space-backed prompt that asks the same question.
+5. Compare answer quality, source traceability, and likely context waste.
+
+## Output
+
+A curated-source plan for one recurring team question.
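+
+A sketch of the contrast between the two prompts (repository and Space names are hypothetical):
+
+```markdown
+Ad hoc: "Here are several hundred pasted lines from three repos... how do
+orders flow from checkout to fulfillment?"
+
+Space-backed: "Using the Order Pipeline space (checkout-service,
+fulfillment-service, architecture docs), trace how an order moves from
+checkout to fulfillment and cite the source files."
+```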
diff --git a/exercises/08-monday-morning-audit/README.md b/exercises/08-monday-morning-audit/README.md
new file mode 100644
index 0000000..793e2c6
--- /dev/null
+++ b/exercises/08-monday-morning-audit/README.md
@@ -0,0 +1,19 @@
+# Exercise 08: Monday Morning Audit
+
+## Goal
+
+Leave the workshop with three low-risk changes to apply to a real project.
+
+## Steps
+
+1. Pick one repository or workflow.
+2. Inventory instructions, prompts, tools, MCP servers, review settings, and usage dashboards.
+3. Mark each item as keep, trim, move, measure, or remove.
+4. Choose three changes that can be completed without a governance redesign.
+5. Assign an owner and measurement signal for each change.
+
+## Output
+
+| Change | Owner | Measurement signal | Due |
+| --- | --- | --- | --- |
+| | | | |
diff --git a/exercises/README.md b/exercises/README.md
new file mode 100644
index 0000000..3585156
--- /dev/null
+++ b/exercises/README.md
@@ -0,0 +1,16 @@
+# Exercises
+
+These exercises support the use-case lab tracks.
+
+| Exercise | Track |
+| --- | --- |
+| [`01-vscode-context-attachments`](01-vscode-context-attachments/README.md) | VS Code/IDE |
+| [`02-vscode-instructions-stack`](02-vscode-instructions-stack/README.md) | VS Code/IDE |
+| [`03-cli-session-scope`](03-cli-session-scope/README.md) | GitHub Copilot CLI |
+| [`04-cli-agent-tool-control`](04-cli-agent-tool-control/README.md) | GitHub Copilot CLI |
+| [`05-github-coding-agent-scope`](05-github-coding-agent-scope/README.md) | GitHub.com |
+| [`06-github-code-review-hygiene`](06-github-code-review-hygiene/README.md) | GitHub.com/code review |
+| [`07-spaces-vs-adhoc-prompts`](07-spaces-vs-adhoc-prompts/README.md) | GitHub.com/governance |
+| [`08-monday-morning-audit`](08-monday-morning-audit/README.md) | Shared closeout |
+
+Each exercise compares a naive workflow with a context-engineered workflow. Use a sandbox or customer-approved repository for live delivery.
diff --git a/facilitator/delivery-guide.md b/facilitator/delivery-guide.md
new file mode 100644
index 0000000..60d56f8
--- /dev/null
+++ b/facilitator/delivery-guide.md
@@ -0,0 +1,90 @@
+# Facilitator Delivery Guide
+
+## Positioning
+
+This workshop teaches token optimization and context engineering across GitHub Copilot surfaces. The delivery deck provides the narrative; the labs and worksheets provide the hands-on path.
+
+Use Copilot CLI as the reference implementation because it exposes context, tools, subagents, and usage most visibly. Then generalize the same principles to VS Code, GitHub.com, Copilot coding/cloud agent, and Copilot code review.
+
+## Artifacts
+
+| Artifact | Use |
+| --- | --- |
+| `decks/token-optimization-context-engineering.pptx` | Primary workshop presentation with embedded speaker notes |
+| `decks/token-optimization-context-engineering.executive.pptx` | Executive briefing visual variant |
+| `decks/token-optimization-context-engineering.technical.pptx` | Technical deep dive visual variant |
+| `decks/token-optimization-context-engineering.outline.md` | Source outline for deck edits |
+| `decks/token-optimization-context-engineering.speaker-notes.md` | Presenter notes embedded into generated PPTX files |
+| `tools/generate_context_deck.py` | Regenerates all deck variants from the Markdown sources |
+| `labs/README.md` | Track selection and student entry point |
+| `templates/README.md` | Copy/paste starter Copilot customization files |
+| `exercises/README.md` | Track-specific hands-on exercises |
+| `resources/copilot-surface-matrix.md` | Living surface reference |
+| `resources/*-worksheet.md` | Hands-on and customer review worksheets |
+
+## Deck format options
+
+- Use the workshop deck for the standard 1-hour, 2-hour, and 4-hour deliveries.
+- Use the executive briefing variant when the audience needs a cleaner leadership narrative before hands-on material.
+- Use the technical deep dive variant when the room is mostly engineers and the discussion will focus on controls, workflows, and review mechanics.
+- Regenerate all variants after deck edits with `python tools\generate_context_deck.py`.
+
+## Delivery tracks
+
+### 1 hour: awareness and demo
+
+Best for leaders, technical leads, and mixed audiences.
+
+1. Run lab `00` for the shared mental model.
+2. Pick one track demo: VS Code/IDE, GitHub Copilot CLI, or GitHub.com/code review.
+3. Show the surface estimator in the sample app.
+4. End with lab `08` and the Monday-morning checklist.
+
+Avoid deep product configuration. Keep the call to action practical.
+
+### 2 hours: practitioner workshop
+
+Best for developers and enablement teams.
+
+1. Teach lab `00`.
+2. Run one complete track bundle: `01-02`, `03-04`, or `05-06`.
+3. Run the matching exercise from `exercises/`.
+4. Use lab `07` to connect cost and quality signals.
+5. Capture commitments with lab `08`.
+
+### 4 hours: applied environment review
+
+Best for teams that can inspect their own repositories.
+
+1. Confirm preflight and safety rules.
+2. Teach lab `00`.
+3. Rotate through labs `01` through `06` or split learners by track.
+4. Review billing, governance, and measurement with lab `07`.
+5. Share anonymized findings.
+6. Build a 30-day operating model with lab `08`.
+
+Use a fallback public repository if the customer environment is not ready.
+
+## Demo guidance
+
+- Have screenshots or a recorded fallback for live demos.
+- Do not rely on a billing or admin page being available in the room.
+- Do not project proprietary source unless the customer explicitly approves it.
+- Keep CLI demos short and visible: context, usage, compacting, and filtered tool output.
+- In VS Code, show Ask, Plan, and Agent as different context shapes, not just different buttons.
+- For code review, emphasize that Copilot review supplements human review and does not replace approval.
+
+## Claims to verify before delivery
+
+Date-stamp product-specific claims in slides or notes:
+
+- Billing and usage model details
+- Model availability and model switching
+- Code review instruction limits
+- Code review quota behavior
+- Coding/cloud agent availability
+- MCP and subagent availability by surface
+
+## Facilitation tone
+
+The goal is not "use less AI." The goal is to use AI deliberately, get better answers, and remove waste.
diff --git a/index.html b/index.html
index 456dea9..2c07691 100644
--- a/index.html
+++ b/index.html
@@ -22,6 +22,7 @@
Enterprise
Cost center
User budgets
+ Surface calculator
Request budget
diff --git a/labs/00-foundations.md b/labs/00-foundations.md
new file mode 100644
index 0000000..7797c60
--- /dev/null
+++ b/labs/00-foundations.md
@@ -0,0 +1,65 @@
+# Lab 00: Foundations for Every Copilot Surface
+
+## Concept
+
+Token optimization and context engineering are the same discipline with two scoreboards: cost and quality. The goal is not to use less Copilot. The goal is to send the right context, in the right shape, to the right surface, for the current task.
+
+Every Copilot workflow has context inputs:
+
+- Product and system instructions
+- Repository, organization, personal, and path-specific instructions
+- Conversation history, summaries, and memory
+- Files, selections, issues, pull requests, and tool output
+- Retrieved docs, web pages, MCP results, and generated plans
+- The current user request and the model response
+
+More context is not automatically better. Irrelevant context increases latency, cost exposure, and answer confusion. Some products and models may use caching, summarization, or product-specific accounting, so "repeated context" is a mental model rather than a universal invoice formula. The durable habit is still useful: remove stale context, keep scope clear, and measure outcomes.
+
+## Surface mechanics
+
+The same five levers transfer across VS Code, GitHub Copilot CLI, GitHub.com, coding agent, and code review:
+
+| Lever | Cost lens | Quality lens |
+| --- | --- | --- |
+| Context hygiene | Avoid sending stale or irrelevant tokens | Keep the model focused on the task |
+| Prompt discipline | Avoid broad prompts that trigger broad discovery | State outcome, scope, constraints, and validation |
+| Model and surface routing | Use the cheapest reliable path | Pick the surface that already has the right context |
+| Scope and tool control | Limit tool calls and retrieved content | Reduce noise, risk, and accidental writes |
+| Measurement | Find usage, request, and cost signals | Track retries, rework, review noise, and time saved |
+
+## Levers in practice
+
+Start with one task and ask:
+
+1. What outcome do I need?
+2. What context is required?
+3. What context is merely convenient?
+4. Which Copilot surface has the cleanest path?
+5. What signal will tell me whether the workflow improved?
+
+For example, "review this repo and fix auth" is too broad. A better starting point is: "In `src/auth/session.ts`, identify why expired sessions are not rejected. Use the failing test output below. Do not inspect unrelated directories. Return a short diagnosis and patch plan before editing."
+
+## Hands-on
+
+Use [`../exercises/08-monday-morning-audit/README.md`](../exercises/08-monday-morning-audit/README.md) if you need a reusable worksheet.
+
+1. Pick one real or sample Copilot task.
+2. List every context source you would normally include.
+3. Mark each source as required, useful, stale, redundant, sensitive, or unknown.
+4. Rewrite the request with only the required and useful context.
+5. Add one validation gate before implementation.
+
+## Checklist
+
+- I can explain why context bloat affects both cost and quality.
+- I can name the five token optimization levers.
+- I can separate required context from convenient context.
+- I can choose a surface based on task shape.
+- I can identify one measurement signal before optimizing.
+
+## Sources
+
+- https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
+- https://docs.github.com/en/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/about-billing-for-github-copilot
+- https://code.visualstudio.com/docs/copilot/overview
+- https://docs.github.com/en/copilot/concepts/context/spaces
diff --git a/labs/01-context-management-basics.md b/labs/01-context-management-basics.md
deleted file mode 100644
index bd5b012..0000000
--- a/labs/01-context-management-basics.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Chapter 1: Context Management Basics
-
-## Core idea
-
-Every AI request has a context budget. More context can help, but irrelevant context increases cost, latency, and confusion. Good context management is the discipline of giving the model enough information to succeed and no more.
-
-## Common waste patterns
-
-- Pasting entire files when only one function matters.
-- Asking broad questions before defining the task.
-- Keeping old chat history after the topic changes.
-- Loading too many tools, MCP servers, memories, or instruction files.
-- Using a high-capacity model for simple search, rewrite, or formatting work.
-
-## Practical context checklist
-
-Before sending a request, ask:
-
-1. What outcome do I want?
-2. What files, functions, logs, or examples are truly required?
-3. What constraints should the model follow?
-4. What can be omitted because it is stale, unrelated, or discoverable?
-5. Should this be a new chat/session?
-
-## Demo
-
-Start with an overloaded prompt:
-
-> Review this repository and improve the auth flow. Here are 8 files, the full README, a stack trace, old chat history, and unrelated requirements.
-
-Refactor it:
-
-> In `src/auth/session.ts`, identify why expired sessions are not rejected. Use the failing test output below. Do not modify unrelated files. Return a concise diagnosis and patch plan first.
-
-## Hands-on lab
-
-1. Pick a real task from the sample repo.
-2. Write the largest prompt you might naturally send.
-3. Remove irrelevant history, files, and requirements.
-4. Add explicit success criteria.
-5. Compare expected token usage, clarity, and answer quality.
-
-## Facilitator notes
-
-- Emphasize that "less context" does not mean "less information." It means higher signal.
-- Encourage attendees to separate discovery, implementation, and review into different turns.
diff --git a/labs/01-ide-context-and-prompt-flow.md b/labs/01-ide-context-and-prompt-flow.md
new file mode 100644
index 0000000..309aaa6
--- /dev/null
+++ b/labs/01-ide-context-and-prompt-flow.md
@@ -0,0 +1,54 @@
+# Lab 01: VS Code/IDE Track - Context and Prompt Flow
+
+## Concept
+
+VS Code is where many developers feel Copilot's cost and quality most directly: the assistant sits next to code, selections, files, tests, terminals, and workspace context. Good IDE usage starts with deliberate context attachment and a clear mode choice.
+
+The learner goal is simple: choose the smallest context shape that can answer the question or complete the change.
+
+## Surface mechanics
+
+VS Code exposes several context shapes:
+
+- **Ask** for explanation, search, and read-only reasoning.
+- **Edit** for targeted changes in known files or selections.
+- **Agent** for multi-step implementation where tool use is worth the extra context and requests.
+- **Plan** for design and sequencing before implementation.
+- Explicit context such as `#selection`, `#file`, open editors, pasted logs, and `#codebase`.
+
+The lesson is not that Agent mode is bad. The expensive pattern is reaching for broad agentic context before the task is clear. Start narrow, then escalate when the task genuinely requires broader search or multi-file edits.
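+
+As a concrete sketch, the same request can be shaped per mode (file and function names below are illustrative, not from the sample repo):
+
+```text
+Ask:   "In #file:src/auth/session.ts, explain why expired sessions are
+        still accepted. Read-only; do not propose edits yet."
+Plan:  "Plan a fix for expired-session acceptance. Acceptance criteria:
+        expired tokens are rejected, existing tests pass, no API changes."
+Agent: "Implement the approved plan. Limit edits to src/auth/ and its
+        tests. Run the test suite and report results before finishing."
+```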
+
+## Levers
+
+| Lever | IDE habit |
+| --- | --- |
+| Context hygiene | Start new chats when the task changes; remove stale attachments |
+| Prompt discipline | Name the outcome, files, constraints, and success criteria |
+| Model and surface routing | Use Ask for discovery, Plan for design, Edit/Agent for implementation |
+| Scope and tool control | Prefer selected code and specific files before `#codebase` |
+| Measurement | Track retries, broad file reads, failed edits, and manual rework |
+
+## Hands-on
+
+Use [`../exercises/01-vscode-context-attachments/README.md`](../exercises/01-vscode-context-attachments/README.md).
+
+1. Start with a broad request: "Fix the login bug in this project."
+2. Rewrite it for Ask mode using a file, selection, or failing test.
+3. Rewrite it for Plan mode with acceptance criteria.
+4. Rewrite it for Agent mode only after the plan is clear.
+5. Compare which version would likely read the most context.
+
+## Checklist
+
+- I can explain Ask, Edit, Agent, and Plan in cost/quality terms.
+- I can decide when `#codebase` is justified.
+- I can use selection or file context before workspace context.
+- I can split discovery, planning, implementation, and review.
+- I can name one retry or rework signal to watch.
+
+## Sources
+
+- https://github.blog/ai-and-ml/github-copilot/copilot-ask-edit-and-agent-modes-what-they-do-and-when-to-use-them/
+- https://code.visualstudio.com/docs/copilot/overview
+- https://code.visualstudio.com/docs/copilot/chat/copilot-chat
+- https://code.visualstudio.com/docs/copilot/reference/copilot-settings
diff --git a/labs/02-ide-instructions-tools-and-mcp.md b/labs/02-ide-instructions-tools-and-mcp.md
new file mode 100644
index 0000000..f64eac0
--- /dev/null
+++ b/labs/02-ide-instructions-tools-and-mcp.md
@@ -0,0 +1,52 @@
+# Lab 02: VS Code/IDE Track - Instructions, Tools, and MCP
+
+## Concept
+
+Persistent customization should make Copilot more predictable without turning every request into a giant prompt. In VS Code, the main optimization is deciding what belongs in always-on instructions, what belongs in path-scoped guidance, what belongs in prompt files or chat modes, and what should be retrieved only when needed.
+
+## Surface mechanics
+
+Use a layered instruction stack:
+
+- `.github/copilot-instructions.md` for short, stable repository guidance.
+- `.github/instructions/*.instructions.md` for path-scoped or file-type-specific rules.
+- `.github/prompts/*.prompt.md` for repeated task workflows.
+- `.github/chatmodes/*.chatmode.md` for mode-level tool and behavior boundaries.
+- `.vscode/mcp.json` for workspace-scoped MCP servers.
+
+Do not put every standard, policy, and workflow into one always-on file. That adds context to every request, even when the task does not need it.
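+
+A minimal sketch of a path-scoped file (the `applyTo` glob and rules are illustrative; check the VS Code custom instructions docs for the current frontmatter schema):
+
+```markdown
+---
+applyTo: "**/*.test.ts"
+---
+
+- Use the existing test helpers; do not introduce new test frameworks.
+- Keep each test focused on a single behavior.
+```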
+
+## Levers
+
+| Lever | IDE customization habit |
+| --- | --- |
+| Context hygiene | Keep always-on instructions short and current |
+| Prompt discipline | Move repeatable workflows into prompt files |
+| Model and surface routing | Use chat modes to steer tool and model behavior |
+| Scope and tool control | Prefer workspace MCP config over broad user-level tools |
+| Measurement | Compare retries before and after instruction changes |
+
+## Hands-on
+
+Use [`../exercises/02-vscode-instructions-stack/README.md`](../exercises/02-vscode-instructions-stack/README.md).
+
+1. Start with a bloated instruction file.
+2. Keep only stable repository rules in the root instruction file.
+3. Move file-type rules into `.instructions.md` files.
+4. Move repeated workflows into `.prompt.md` files.
+5. Keep one MCP server in workspace scope and document why it exists.
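+
+For step 5, a workspace-scoped entry in `.vscode/mcp.json` can be sketched like this (the server name and command are placeholders; verify the current schema against the VS Code MCP docs):
+
+```json
+{
+  "servers": {
+    "issue-tracker": {
+      "command": "npx",
+      "args": ["-y", "example-issue-mcp-server"]
+    }
+  }
+}
+```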
+
+## Checklist
+
+- I can separate repo, path, prompt, chat mode, and MCP guidance.
+- I can keep always-on instructions reviewable.
+- I can explain why a workspace MCP server is enabled.
+- I can remove a rule that cannot be observed in output.
+- I can test whether the instruction stack changes behavior.
+
+## Sources
+
+- https://code.visualstudio.com/docs/copilot/customization/custom-instructions
+- https://code.visualstudio.com/docs/copilot/customization/prompt-files
+- https://code.visualstudio.com/docs/copilot/customization/custom-chat-modes
+- https://code.visualstudio.com/docs/copilot/customization/mcp-servers
diff --git a/labs/02-instructions-and-agent-customizations.md b/labs/02-instructions-and-agent-customizations.md
deleted file mode 100644
index df1e4fc..0000000
--- a/labs/02-instructions-and-agent-customizations.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# Chapter 2: Instructions and Agent Customizations
-
-## Core idea
-
-Instructions, custom agents, memory, repository guidance, and tool configuration shape every response. They should be intentional, short, current, and testable.
-
-## What to manage
-
-- Organization or team instructions.
-- Repository instructions such as coding standards and security rules.
-- Agent definitions and specialized roles.
-- Personal preferences and saved memories.
-- Prompt templates and reusable task checklists.
-
-## Good instruction patterns
-
-- Put stable rules in shared documentation.
-- Put task-specific constraints in the current prompt.
-- Keep instructions short enough that developers can review them.
-- Use positive, concrete rules: "Prefer small diffs" instead of "Do not make huge changes."
-- Include source-of-truth links rather than copying long policy text.
-
-## Bad instruction patterns
-
-- Conflicting rules across personal, repo, and org scopes.
-- Large copied policies that crowd out task context.
-- Hidden agent behavior that developers cannot explain.
-- Outdated examples that no longer match the repository.
-
-## Demo
-
-Compare these two instructions:
-
-Poor:
-
-> Always write perfect code, follow all best practices, be secure, be concise, be helpful, and optimize everything.
-
-Better:
-
-> For this repository, make minimal changes, preserve public APIs, include tests for behavior changes, and explain any security tradeoff before implementing it.
-
-## Hands-on lab
-
-1. Draft a repository instruction file for the sample repo.
-2. Limit it to 10-15 bullets.
-3. Separate rules into: coding style, testing, security, documentation, and review.
-4. Remove any rule that cannot be observed in a diff or answer.
-5. Ask an AI assistant to perform a small task with and without the instruction set.
-
-## Customer relationship message
-
-Instruction management is a shared responsibility. We help customers reduce usage and improve outcomes by making customization visible, understandable, and maintainable.
diff --git a/labs/03-cli-context-and-tool-output.md b/labs/03-cli-context-and-tool-output.md
new file mode 100644
index 0000000..7f9fd97
--- /dev/null
+++ b/labs/03-cli-context-and-tool-output.md
@@ -0,0 +1,51 @@
+# Lab 03: GHCP CLI Track - Session Context and Tool Output
+
+## Concept
+
+GitHub Copilot CLI is the clearest surface for teaching context management because the user can see commands, tool output, session boundaries, and summaries. The optimization habit is to keep the main session focused and prevent noisy terminal output from becoming permanent conversation context.
+
+## Surface mechanics
+
+Treat CLI sessions like branches:
+
+- One session per task or closely related set of tasks.
+- Summarize before switching focus.
+- Filter logs, test output, and search results before putting them in the conversation.
+- Prefer targeted file reads and searches over dumping directories or full command output.
+- Use handoffs when the next step should continue without all previous noise.
+
+Commands, stack traces, build logs, and search results can be useful. They can also dominate the context window and make the assistant chase irrelevant details.
+
+## Levers
+
+| Lever | CLI habit |
+| --- | --- |
+| Context hygiene | Start fresh when the task changes; compact or summarize at boundaries |
+| Prompt discipline | Ask for a diagnosis or plan before broad edits |
+| Model and surface routing | Use CLI when tool visibility and session control matter |
+| Scope and tool control | Run narrow searches and suppress verbose output |
+| Measurement | Watch tool calls, repeated searches, retries, and context growth |
+
+## Hands-on
+
+Use [`../exercises/03-cli-session-scope/README.md`](../exercises/03-cli-session-scope/README.md).
+
+1. Start with a noisy troubleshooting transcript.
+2. Identify stale assumptions and irrelevant command output.
+3. Rewrite the next request with only the useful findings.
+4. Add file and command boundaries.
+5. Write a five-line handoff summary for a fresh session.
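+
+A handoff summary for step 5 might look like this (task details are illustrative):
+
+```markdown
+Task: reject expired sessions in src/auth/session.ts
+Current state: failing test reproduces the bug; cause narrowed to the expiry check
+Decisions made: patch the validation logic; do not change the public API
+Constraints: edits limited to src/auth/ and its tests
+Next: implement the patch and rerun the auth test suite
+```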
+
+## Checklist
+
+- I can decide when a CLI session should continue or restart.
+- I can summarize before changing task focus.
+- I can filter command output before sharing it with the model.
+- I can keep search and file reads targeted.
+- I can preserve useful decisions without carrying all prior noise.
+
+## Sources
+
+- https://docs.github.com/en/copilot/using-github-copilot/using-github-copilot-in-the-command-line
+- https://docs.github.com/en/copilot/using-github-copilot/using-github-copilot-cli
+- https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
diff --git a/labs/03-mcp-and-tool-optimization.md b/labs/03-mcp-and-tool-optimization.md
deleted file mode 100644
index 470e322..0000000
--- a/labs/03-mcp-and-tool-optimization.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Chapter 3: MCP and Tool Optimization
-
-## Core idea
-
-Tools and MCP servers can make agents more capable, but every enabled tool adds selection overhead, security considerations, and possible latency. Enable the smallest useful tool set for the task.
-
-## Tool selection principles
-
-- Prefer local repository search for code questions.
-- Prefer official APIs for authoritative external data.
-- Disable broad or experimental tools unless the task requires them.
-- Use read-only tools by default; enable write tools only when needed.
-- Document what each MCP server is for, who owns it, and what data it can access.
-
-## MCP hygiene checklist
-
-- Is this server needed for the current workflow?
-- Does it expose sensitive data?
-- Does it support least-privilege access?
-- Are tool names and descriptions clear enough for the model to choose correctly?
-- Are rate limits, costs, and audit logs understood?
-- Is there a fallback if the server is unavailable?
-
-## Demo
-
-Run the same task with two configurations:
-
-1. Too many tools: issue tracker, browser, database, cloud logs, file system, shell, and package registry.
-2. Focused tools: repository search and issue tracker only.
-
-Discuss differences in speed, safety, and answer focus.
-
-## Hands-on lab
-
-1. Choose a task: fix a bug, summarize an issue, or update docs.
-2. List every tool the agent could use.
-3. Remove tools that are not needed.
-4. Define read/write boundaries.
-5. Write a one-paragraph tool policy for the task.
-
-## Practical recommendations
-
-- Create task-based tool profiles: "docs only," "code edit," "incident read-only," and "release prep."
-- Review MCP server access quarterly.
-- Treat tool descriptions as product UX: clear descriptions reduce mistaken tool calls.
diff --git a/labs/04-chat-session-management.md b/labs/04-chat-session-management.md
deleted file mode 100644
index a8d09ee..0000000
--- a/labs/04-chat-session-management.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# Chapter 4: Chat Session Management
-
-## Core idea
-
-A chat session is a working context, not a permanent project record. Start new sessions when the task changes, summarize before switching, and avoid carrying stale history.
-
-## When to continue a session
-
-- You are iterating on the same bug, file, or design.
-- The prior turns contain decisions still relevant to the task.
-- The model needs continuity to avoid repeating work.
-
-## When to start a new session
-
-- You changed goals.
-- The conversation contains failed experiments or outdated assumptions.
-- You are moving from discovery to implementation or from implementation to review.
-- The model starts referencing irrelevant prior context.
-
-## Session handoff template
-
-Use this when starting fresh:
-
-```markdown
-Task:
-Current state:
-Important files:
-Decisions already made:
-Constraints:
-What I need next:
-```
-
-## Demo
-
-Show a long conversation where the assistant keeps solving an old problem. Then start a fresh session with a five-line handoff and compare output quality.
-
-## Hands-on lab
-
-1. Take a messy chat transcript or simulated conversation.
-2. Identify stale assumptions.
-3. Write a concise handoff summary.
-4. Continue from the handoff in a new session.
-
-## Recommended habit
-
-End important sessions with:
-
-- What changed?
-- What remains?
-- What decisions were made?
-- What should the next session know?
diff --git a/labs/04-cli-agents-tools-and-cost-control.md b/labs/04-cli-agents-tools-and-cost-control.md
new file mode 100644
index 0000000..a28ce55
--- /dev/null
+++ b/labs/04-cli-agents-tools-and-cost-control.md
@@ -0,0 +1,53 @@
+# Lab 04: GHCP CLI Track - Agents, Tools, and Cost Control
+
+## Concept
+
+Agentic CLI workflows are powerful because they can search, edit, test, delegate, and use MCP tools. They are expensive or risky when the user delegates vague work, enables too many tools, or lets raw discovery output flood the main session.
+
+The optimization question is not "agent or no agent." It is "which work should stay in the main session, which work should be delegated, and what summary should come back?"
+
+## Surface mechanics
+
+Use a small decision tree:
+
+1. Do the task directly when the files and commands are known.
+2. Delegate read-only discovery when the search space is large.
+3. Use specialist agents for bounded work with clear success criteria.
+4. Keep MCP tools least-privilege and task-specific.
+5. Require approval for risky writes, dependency changes, secrets, infrastructure, and production configuration.
+
+The main session should receive concise findings, patches, and validation results. It should not receive every intermediate log line unless the log is the evidence.
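+
+A delegated, read-only discovery brief can be as short as this (the task itself is illustrative):
+
+```text
+Scope:  find where rate limiting is configured; read-only
+Tools:  repository search and file reads only; no shell, no writes
+Avoid:  vendored dependencies and generated files
+Return: relevant files with one-line notes, ten lines maximum
+```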
+
+## Levers
+
+| Lever | CLI agent habit |
+| --- | --- |
+| Context hygiene | Return summaries, not raw exploration dumps |
+| Prompt discipline | Give agents task scope, constraints, and validation |
+| Model and surface routing | Escalate only when deeper reasoning or autonomy is needed |
+| Scope and tool control | Enable only the tools needed for the task |
+| Measurement | Compare direct work, delegated work, retries, and final diff size |
+
+## Hands-on
+
+Use [`../exercises/04-cli-agent-tool-control/README.md`](../exercises/04-cli-agent-tool-control/README.md).
+
+1. Pick a task with unknown files.
+2. Decide what discovery can be delegated read-only.
+3. Define tool boundaries and files to avoid.
+4. Ask for a summary before implementation.
+5. Decide whether to proceed directly or delegate the patch.
+
+## Checklist
+
+- I can decide when delegation reduces main-session context.
+- I can write a bounded agent task.
+- I can restrict tools and file scope.
+- I can ask for summaries instead of full logs.
+- I can require human approval for risky operations.
+
+## Sources
+
+- https://docs.github.com/en/copilot/using-github-copilot/using-github-copilot-in-the-command-line
+- https://docs.github.com/en/copilot/concepts/context/model-context-protocol
+- https://docs.github.com/en/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/about-billing-for-github-copilot
diff --git a/labs/05-github-web-context-and-coding-agent.md b/labs/05-github-web-context-and-coding-agent.md
new file mode 100644
index 0000000..2f3b41e
--- /dev/null
+++ b/labs/05-github-web-context-and-coding-agent.md
@@ -0,0 +1,54 @@
+# Lab 05: GitHub.com Track - Web Context and Coding Agent
+
+## Concept
+
+GitHub.com already has useful context: repositories, issues, pull requests, discussions, files, and Copilot Spaces. The optimization habit is to start from the page that contains the right context and to write scoped tasks when handing work to coding agent.
+
+Use a sandbox or customer-approved demo organization. Do not assume a real customer org is safe to project. TODO(cody): confirm the standard demo organization or fallback repository before live delivery.
+
+## Surface mechanics
+
+Common GitHub.com context paths:
+
+- Repository pages for file and architecture questions.
+- Issues for scoped implementation tasks.
+- Pull requests for review and follow-up questions.
+- Copilot Spaces for curated cross-repo or document-backed knowledge.
+- Coding agent for async implementation when the issue is clear enough to produce a reviewable PR.
+
+A vague issue creates broad exploration, broad diffs, and harder review. A good coding-agent issue looks like a small implementation brief: goal, files in scope, files to avoid, acceptance criteria, validation commands, and reviewer expectations.
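+
+A scoped issue following that brief shape might look like this (paths and commands are illustrative):
+
+```markdown
+## Goal
+Reject expired sessions during login.
+
+## Scope
+- In scope: src/auth/session.ts and its tests
+- Avoid: unrelated directories and dependency upgrades
+
+## Acceptance criteria
+- Expired tokens are rejected
+- Existing auth tests still pass
+
+## Validation
+Run the auth test suite and include the results in the PR.
+
+## Review expectations
+Small diff; call out any behavior change in the PR description.
+```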
+
+## Levers
+
+| Lever | GitHub.com habit |
+| --- | --- |
+| Context hygiene | Ask from the page that already contains relevant context |
+| Prompt discipline | Write issues as implementation briefs |
+| Model and surface routing | Use coding agent for scoped async work, not vague exploration |
+| Scope and tool control | Keep Spaces sources curated and issues narrow |
+| Measurement | Watch generated PR size, review cycles, retries, and time to merge |
+
+## Hands-on
+
+Use [`../exercises/05-github-coding-agent-scope/README.md`](../exercises/05-github-coding-agent-scope/README.md).
+
+1. Start with a vague issue.
+2. Add goal, constraints, files in scope, and files to avoid.
+3. Add acceptance criteria and validation commands.
+4. Add review expectations.
+5. Decide whether the task is small enough for coding agent.
+
+## Checklist
+
+- I can pick the GitHub.com page that gives Copilot the right context.
+- I can write a scoped coding-agent issue.
+- I can avoid projecting sensitive repositories in workshops.
+- I can use Spaces for curated context instead of ad hoc dumping.
+- I can review generated PRs like teammate work.
+
+## Sources
+
+- https://docs.github.com/en/copilot/using-github-copilot/coding-agent/about-copilot-coding-agent
+- https://github.blog/changelog/2025-09-25-copilot-coding-agent-is-now-generally-available/
+- https://docs.github.com/en/copilot/concepts/context/spaces
+- https://github.blog/changelog/2025-05-29-introducing-copilot-spaces-a-new-way-to-work-with-code-and-context/
diff --git a/labs/05-model-choice.md b/labs/05-model-choice.md
deleted file mode 100644
index 71d8504..0000000
--- a/labs/05-model-choice.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Chapter 5: Model Choice
-
-## Core idea
-
-Model choice is a cost-quality-latency decision. Use the smallest model that reliably handles the task, and escalate only when the task requires deeper reasoning, larger context, or higher reliability.
-
-## Suggested routing
-
-| Task | Recommended model style |
-| --- | --- |
-| Simple rewrite, formatting, naming | Fast/low-cost model |
-| Code search explanation | Fast or standard model |
-| Localized bug fix | Standard coding model |
-| Cross-file architecture change | Stronger reasoning model |
-| Security-sensitive review | Stronger reasoning model plus human review |
-| Long-context synthesis | Long-context model, but only after trimming inputs |
-
-## Escalation triggers
-
-- The model misses constraints after a clear prompt.
-- The task spans many files or systems.
-- The answer requires careful tradeoff analysis.
-- The cost of a wrong answer is high.
-- You need a second opinion for design, security, or migration work.
-
-## Anti-patterns
-
-- Using the most expensive model for every turn.
-- Using a tiny model for complex design and then spending more turns correcting it.
-- Switching models without summarizing the current state.
-
-## Hands-on lab
-
-1. Pick three tasks: simple, medium, complex.
-2. Decide the initial model for each.
-3. Define when you would escalate.
-4. Record whether the first model was sufficient.
-
-## Facilitator note
-
-The goal is not always the cheapest single request. The goal is the lowest total cost for a correct and useful outcome.
diff --git a/labs/06-chat-history-and-memory.md b/labs/06-chat-history-and-memory.md
deleted file mode 100644
index 9f9c485..0000000
--- a/labs/06-chat-history-and-memory.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Chapter 6: Chat History and Memory Strategies
-
-## Core idea
-
-Memory can improve continuity, but unmanaged memory can introduce stale preferences, hidden assumptions, and unnecessary context. Treat memory as durable configuration, not a dumping ground.
-
-## Types of memory
-
-- Personal preferences: communication style, preferred tools, recurring workflows.
-- Repository conventions: testing commands, architecture rules, coding standards.
-- Project decisions: active design choices and migration constraints.
-- Session summaries: temporary handoff notes.
-
-## What belongs in memory
-
-- Stable facts used repeatedly.
-- Rules that are hard to infer from one file.
-- Team conventions that prevent mistakes.
-
-## What does not belong in memory
-
-- Temporary debugging guesses.
-- Secrets, credentials, or customer-sensitive data.
-- One-time task details.
-- Opinions that may conflict with team standards.
-
-## Memory review checklist
-
-- Is it still true?
-- Is it scoped correctly: personal, repo, team, or organization?
-- Does it cite a source?
-- Would another developer understand why it exists?
-- Could it cause the model to ignore the current prompt?
-
-## Hands-on lab
-
-1. Write five candidate memories from a sample project.
-2. Keep only the ones that are stable and reusable.
-3. Rewrite them as short, source-backed facts.
-4. Decide whether each belongs at personal, repo, or team scope.
-
-## Practical recommendation
-
-Use summaries for temporary continuity and memory for durable knowledge.
diff --git a/labs/06-github-code-review-and-pr-hygiene.md b/labs/06-github-code-review-and-pr-hygiene.md
new file mode 100644
index 0000000..8f44060
--- /dev/null
+++ b/labs/06-github-code-review-and-pr-hygiene.md
@@ -0,0 +1,53 @@
+# Lab 06: GitHub.com/Code Review Track - PR and Review Hygiene
+
+## Concept
+
+Copilot code review is a PR-shaped workflow. It works best when the pull request is small, the description is clear, and repository instructions are concise. It works poorly when a PR mixes unrelated changes or when review instructions are bloated, stale, or too broad.
+
+Teach code review as a purpose-built review surface, not as a generic chat prompt.
+
+## Surface mechanics
+
+Important review habits:
+
+- Keep pull requests small and focused.
+- Explain intent, risk, and validation in the PR description.
+- Keep repository and path-specific instructions concise.
+- Treat Copilot comments as feedback, not approval.
+- Review automatic review settings so draft-heavy or push-heavy repositories do not generate avoidable noise.
+
+Product details here change quickly. As of 2026-05-12, re-verify GitHub Docs before delivery for model availability, review instruction limits, automatic review behavior, and quota implications.
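+
+A review-ready description following those habits might look like this (details are illustrative):
+
+```markdown
+## Problem
+Expired sessions were accepted during login.
+
+## Approach
+Tightened the expiry check in src/auth/session.ts; no public API change.
+
+## Risk
+Low: the behavior change is covered by new tests.
+
+## Validation
+Auth test suite passes locally; added a regression test for expired tokens.
+```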
+
+## Levers
+
+| Lever | Code review habit |
+| --- | --- |
+| Context hygiene | Keep diffs small and remove unrelated files |
+| Prompt discipline | Use PR descriptions to explain intent and validation |
+| Model and surface routing | Use review for PR feedback, not broad implementation planning |
+| Scope and tool control | Use path-specific review instructions where possible |
+| Measurement | Track false positives, repeated comments, PR churn, and time to merge |
+
+## Hands-on
+
+Use [`../exercises/06-github-code-review-hygiene/README.md`](../exercises/06-github-code-review-hygiene/README.md).
+
+1. Start with a broad PR description.
+2. Add problem, approach, risk, and validation.
+3. Identify unrelated files that should move to another PR.
+4. Draft concise review instructions for one file area.
+5. Decide which findings require human review before merge.
+
+## Checklist
+
+- I can explain why PR size affects review quality.
+- I can write a review-ready PR description.
+- I can keep review instructions short and targeted.
+- I can separate Copilot feedback from required approval.
+- I can measure review signal instead of only review volume.
+
+## Sources
+
+- https://docs.github.com/en/copilot/using-github-copilot/code-review/using-copilot-code-review
+- https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot
+- https://docs.github.com/en/copilot/managing-copilot/configuring-and-auditing-content-exclusion
diff --git a/labs/07-measurement-billing-and-governance.md b/labs/07-measurement-billing-and-governance.md
new file mode 100644
index 0000000..b64a97f
--- /dev/null
+++ b/labs/07-measurement-billing-and-governance.md
@@ -0,0 +1,56 @@
+# Lab 07: Measurement, Billing, and Governance
+
+## Concept
+
+Optimization is not complete until a team can see whether the change helped. Some surfaces expose token or context signals. Others expose request counts, billing summaries, review counts, dashboard data, or indirect quality signals. Teach teams to build a map of evidence instead of promising one universal dashboard.
+
+## Surface mechanics
+
+Common signals:
+
+| Surface or client | Signals to inspect |
+| --- | --- |
+| GHCP CLI | Usage/context indicators, trace output, repeated tool calls |
+| VS Code/IDE | Chat history, mode choice, retries, plan vs agent usage |
+| GitHub.com/coding agent | Generated PR size, issue quality, review cycles, billing pages |
+| Code review | Review quota, false positives, repeated comments, PR churn |
+| Model provider or gateway | Token counts, latency, cost, identity mapping |
+| GitHub billing | Premium requests, AI credits, budgets, alerts, policy settings |
+
+Do not shame high-usage users. Pair usage with business value and quality. A power user building high-value automation may be using Copilot well; a low-usage workflow with many retries may still need improvement.
+
+## Levers
+
+| Lever | Measurement habit |
+| --- | --- |
+| Context hygiene | Compare context size, retries, and stale-history incidents |
+| Prompt discipline | Compare before/after task completion and rework |
+| Model and surface routing | Track when stronger models reduce total retries |
+| Scope and tool control | Watch unnecessary tool calls and broad file reads |
+| Measurement | Set a baseline before making policy changes |
+
+## Hands-on
+
+Use [`../exercises/07-spaces-vs-adhoc-prompts/README.md`](../exercises/07-spaces-vs-adhoc-prompts/README.md) for curated-context measurement, or use the sample app calculator for per-surface scenarios.
+
+1. List every AI client the team uses.
+2. Map each client to identity, owner, billing source, and dashboard.
+3. Pick one high-value workflow.
+4. Define one cost signal and one quality signal.
+5. Decide what action would be taken if the signal changes.
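+
+The client map from steps 1-2 and the signals from steps 4-5 can be captured in a short per-client template (the field names are suggestions):
+
+```markdown
+Client:
+Identity (personal, org, service principal, API key):
+Owner:
+Billing source:
+Usage dashboard:
+Cost signal:
+Quality signal:
+Action if the signal changes:
+```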
+
+## Checklist
+
+- I can map Copilot usage to an owner and dashboard.
+- I can avoid unsupported billing claims.
+- I can pair cost signals with quality signals.
+- I can explain budgets without becoming the spend police.
+- I can decide what to measure before changing defaults.
+
+## Sources
+
+- https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
+- https://github.com/features/copilot/plans
+- https://docs.github.com/en/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/about-billing-for-github-copilot
+- https://docs.github.com/en/billing/how-tos/set-up-budgets
+- https://docs.github.com/en/copilot/managing-copilot/configuring-and-auditing-content-exclusion
diff --git a/labs/07-usage-and-billing-visibility.md b/labs/07-usage-and-billing-visibility.md
deleted file mode 100644
index 94aee9c..0000000
--- a/labs/07-usage-and-billing-visibility.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Chapter 7: Usage and Billing Visibility
-
-## Core idea
-
-Customers need clear ways to understand monthly AI usage across clients. The exact source of truth depends on the product, plan, identity provider, and whether the usage flows through GitHub, a model provider, a cloud account, or an internal gateway.
-
-## What to show attendees
-
-- Where individual users can see plan, entitlement, and usage indicators.
-- Where organization administrators can see aggregate usage and billing.
-- How usage differs between IDE extensions, web chat, CLI tools, API keys, and MCP-backed agents.
-- Why a single developer may have usage in multiple systems.
-
-## Common places to check
-
-| Client or path | Usage source to inspect |
-| --- | --- |
-| GitHub Copilot in IDEs | GitHub user/org/enterprise Copilot settings, usage metrics, and billing views available for the plan |
-| GitHub.com chat or coding agents | GitHub account, organization, or enterprise usage and billing pages |
-| Model provider API keys | Provider dashboard usage and invoices |
-| Azure OpenAI or cloud-hosted models | Cloud cost management, resource metrics, and deployment logs |
-| Internal AI gateway | Gateway logs, chargeback reports, and identity mapping |
-| Third-party developer tools | Vendor admin console and invoices |
-
-## Monthly usage conversation guide
-
-1. Identify every AI client in use.
-2. Map each client to an identity: personal account, org account, service principal, or API key.
-3. Map each identity to a billing source.
-4. Compare high-usage workflows with business value.
-5. Agree on enablement actions before imposing restrictions.
-
-## Customer relationship message
-
-After billing model changes, customers need transparency and practical controls. The best conversation is not "use less AI"; it is "use AI deliberately, measure outcomes, and remove waste."
-
-## Hands-on lab
-
-Create a usage map:
-
-```markdown
-Client:
-Users:
-Authentication path:
-Billing source:
-Usage dashboard:
-Owner:
-Optimization action:
-```
-
-## Facilitator note
-
-Avoid promising one universal dashboard unless the customer has implemented one. Instead, help them build a reliable map of systems and owners.
diff --git a/labs/08-ai-evals-and-observability.md b/labs/08-ai-evals-and-observability.md
deleted file mode 100644
index 451d33c..0000000
--- a/labs/08-ai-evals-and-observability.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Chapter 8: AI Evals and Observability
-
-## Core idea
-
-Token optimization should be measured. AI evals help teams compare prompts, instructions, models, memories, and tool configurations using repeatable examples instead of anecdotes.
-
-## Candidate eval platform: W&B Weave
-
-Consider W&B Weave for tracing, prompt and model comparison, qualitative review, and lightweight observability. If a customer already uses another eval platform, keep the workshop tool-agnostic and focus on repeatable datasets, rubrics, and decision criteria.
-
-## What to evaluate
-
-- Correctness: did the assistant solve the task?
-- Groundedness: did it use the supplied repository facts?
-- Cost: how many tokens, requests, and tool calls were needed?
-- Latency: how long did the workflow take?
-- Safety: did it avoid secrets, unsafe commands, or policy violations?
-- Developer experience: was the answer actionable?
-
-## Recommended tools to consider
-
-- W&B Weave: tracing, prompt/version comparison, human review workflows.
-- promptfoo: lightweight prompt and model regression testing.
-- LangSmith: tracing, datasets, and eval workflows for LangChain-based systems.
-- OpenAI Evals or provider-native eval tools: model and prompt comparison.
-- Azure AI Evaluation: useful for Azure-hosted AI workflows.
-- Ragas or DeepEval: evaluation patterns for retrieval-augmented generation.
-- Custom GitHub Actions or CI checks: simple regression suites for prompts and agent instructions.
-
-## Minimal eval dataset
-
-Start with 10-20 examples:
-
-- 5 common coding tasks.
-- 5 documentation or explanation tasks.
-- 3 security or policy-sensitive tasks.
-- 3 tool-use tasks.
-- 2 failure cases where the model should ask clarifying questions.
-
-## Hands-on lab
-
-1. Select three representative prompts.
-2. Run each with two instruction sets or two models.
-3. Score outputs from 1-5 on correctness, usefulness, and cost.
-4. Decide which change should become the new default.
-
-## Practical recommendation
-
-Use evals to justify changes to model routing, instruction files, MCP configuration, and memory strategy.
diff --git a/labs/08-applied-repo-review-and-adoption.md b/labs/08-applied-repo-review-and-adoption.md
new file mode 100644
index 0000000..0d40950
--- /dev/null
+++ b/labs/08-applied-repo-review-and-adoption.md
@@ -0,0 +1,61 @@
+# Lab 08: Applied Repo Review and Adoption
+
+## Concept
+
+The 4-hour track should help customers inspect their own environment safely. The goal is not to expose proprietary code in the room; it is to give each participant a structured way to identify context waste, instruction bloat, tool risk, review noise, and measurement gaps.
+
+Use a customer-approved repository or a sandbox fallback. TODO(cody): confirm the standard sandbox organization and fallback repository for GitHub.com hands-on delivery.
+
+## Surface mechanics
+
+Use these resources:
+
+- [`../resources/customer-preflight-checklist.md`](../resources/customer-preflight-checklist.md)
+- [`../resources/context-inventory-worksheet.md`](../resources/context-inventory-worksheet.md)
+- [`../resources/instruction-diet-worksheet.md`](../resources/instruction-diet-worksheet.md)
+- [`../resources/monday-morning-checklist.md`](../resources/monday-morning-checklist.md)
+
+Review areas:
+
+| Area | What to inspect | Common action |
+| --- | --- | --- |
+| Instructions | Repo, path, prompt, chat mode, agent, and review guidance | Trim, split, or clarify |
+| Source of truth | Architecture docs, standards, issue templates, PR templates | Link concise docs instead of copying long guidance |
+| Tool and MCP setup | Enabled servers, tool descriptions, read/write access | Remove unused tools and document ownership |
+| Surface routing | IDE, CLI, web, coding agent, and review habits | Match surface to task shape |
+| Measurement | Usage pages, budgets, evals, PR review counts | Identify baseline and owner |
+
+## Levers
+
+The adoption loop is:
+
+1. Baseline one workflow.
+2. Apply one context or routing change.
+3. Measure cost and quality signals.
+4. Keep the change only if it improves the outcome.
+5. Share the pattern with the team.
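+
+The loop above can be recorded as a one-page baseline note so the keep-or-revert decision in step 4 is made against written evidence (the fields are suggestions):
+
+```markdown
+Workflow:
+Baseline date:
+Cost signal and starting value:
+Quality signal and starting value:
+Change applied:
+Keep or revert, and why:
+```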
+
+## Hands-on
+
+Use [`../exercises/08-monday-morning-audit/README.md`](../exercises/08-monday-morning-audit/README.md).
+
+1. Pick one repository or workflow.
+2. Complete the context inventory worksheet.
+3. Complete the instruction diet worksheet for one instruction file or workflow.
+4. Identify three low-risk improvements.
+5. Convert the findings into a 30-day adoption plan.
+
+## Checklist
+
+- I can run the audit without exposing sensitive source.
+- I can pick a safe fallback repository.
+- I can identify three low-risk improvements.
+- I can assign owners for measurement and governance.
+- I can turn workshop findings into a 30-day plan.
+
+## Sources
+
+- https://docs.github.com/en/copilot/managing-copilot/configuring-and-auditing-content-exclusion
+- https://docs.github.com/en/copilot/concepts/context/spaces
+- https://docs.github.com/en/copilot/using-github-copilot/coding-agent/about-copilot-coding-agent
+- https://docs.github.com/en/copilot/using-github-copilot/code-review/using-copilot-code-review
diff --git a/labs/09-ideal-workshop-repo.md b/labs/09-ideal-workshop-repo.md
deleted file mode 100644
index 1a9c0aa..0000000
--- a/labs/09-ideal-workshop-repo.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Chapter 9: Ideal External Workshop Repository
-
-## Goal
-
-The workshop repository should be safe, realistic, and small enough for attendees to understand quickly. It should demonstrate context management tradeoffs without exposing customer code.
-
-## Recommended characteristics
-
-- Public or easily shareable with external attendees.
-- Uses a familiar stack such as TypeScript, Python, .NET, or Java.
-- Contains 5-15 source files, 3-8 tests, and clear documentation.
-- Includes realistic issues: a bug, a docs gap, a small refactor, and a test failure.
-- Has enough structure for code search and tool use to matter.
-- Avoids secrets, production data, private endpoints, or proprietary algorithms.
-- Includes a small issue backlog for lab prompts.
-
-## Suggested structure
-
-```text
-sample-workshop-repo/
- README.md
- CONTRIBUTING.md
- docs/
- src/
- tests/
- issues/
- 01-bug.md
- 02-docs-update.md
- 03-refactor.md
- 04-eval-case.md
- prompts/
- baseline-prompts.md
- improved-prompts.md
- evals/
- dataset.jsonl
- rubric.md
-```
-
-## Built-in lab scenarios
-
-- Fix a localized bug with minimal context.
-- Improve a prompt by removing irrelevant files.
-- Compare model choices for a simple and complex task.
-- Decide which tools/MCP servers are necessary.
-- Create a session handoff summary.
-- Run a small eval against two prompt variants.
-
-## Repository README should include
-
-- Setup instructions.
-- Known lab tasks.
-- Expected time per task.
-- Safety note that the repo contains no real secrets.
-- Guidance for resetting to the starting state.
-
-## Optional enhancements
-
-- Add intentionally noisy files to teach context filtering.
-- Add a simulated usage report for billing discussions.
-- Add a lightweight eval dataset with expected outcomes.
-- Add role cards for developer, team lead, platform admin, and security reviewer.
diff --git a/labs/10-next-steps-and-extra-topics.md b/labs/10-next-steps-and-extra-topics.md
deleted file mode 100644
index 320f3a1..0000000
--- a/labs/10-next-steps-and-extra-topics.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Chapter 10: Next Steps and Additional Topics
-
-## Team next steps
-
-Ask attendees to commit to three actions:
-
-1. Create or refresh repository instructions.
-2. Define model and tool routing guidance for common tasks.
-3. Review monthly usage and identify one high-value optimization opportunity.
-
-## 30-day adoption plan
-
-### Week 1: Baseline
-
-- Map AI clients and usage sources.
-- Collect common prompts and workflows.
-- Identify the top three waste patterns.
-
-### Week 2: Standardize
-
-- Publish instruction guidance.
-- Define chat session and handoff practices.
-- Create task-based tool profiles.
-
-### Week 3: Measure
-
-- Build a small eval dataset.
-- Compare prompt, model, and tool configurations.
-- Review usage and quality signals together.
-
-### Week 4: Scale
-
-- Share examples with other teams.
-- Add governance for MCP servers and memory.
-- Schedule a monthly review of usage, quality, and developer experience.
-
-## Recommended additional topics
-
-- Prompt injection and tool safety.
-- Secrets handling and data boundaries.
-- Retrieval and document chunking strategies.
-- Agent review and approval workflows.
-- Accessibility and inclusive AI-assisted development.
-- Incident response for unsafe or expensive AI behavior.
-- Team enablement metrics: satisfaction, cycle time, rework, and defect rate.
-- Change management after pricing or billing updates.
-
-## Customer relationship closing
-
-Position the workshop as a partnership:
-
-- We want customers to get durable value, not surprise usage.
-- We will help teams understand where usage comes from.
-- We will share practical controls before recommending restrictions.
-- We will keep improving guidance as tools, models, and billing models evolve.
-
-## Reusable closing prompt
-
-```markdown
-Based on today's workshop, identify:
-1. One context habit I will change.
-2. One tool or MCP setting my team should review.
-3. One usage dashboard or report I need access to.
-4. One eval we should create before changing defaults.
-```
diff --git a/labs/MIGRATION.md b/labs/MIGRATION.md
new file mode 100644
index 0000000..ce7d265
--- /dev/null
+++ b/labs/MIGRATION.md
@@ -0,0 +1,19 @@
+# Lab Migration Map
+
+The curriculum was consolidated from thirteen chapter files into a track-based architecture of at most nine labs organized by use case.
+
+| Previous lab | New destination |
+| --- | --- |
+| `00-token-optimization-and-context-engineering.md` | `00-foundations.md` |
+| `01-context-management-basics.md` | `00-foundations.md` |
+| `02-instructions-and-agent-customizations.md` | `02-ide-instructions-tools-and-mcp.md` |
+| `03-mcp-and-tool-optimization.md` | `02-ide-instructions-tools-and-mcp.md`, `04-cli-agents-tools-and-cost-control.md` |
+| `04-chat-session-management.md` | `01-ide-context-and-prompt-flow.md`, `03-cli-context-and-tool-output.md` |
+| `05-model-choice.md` | `04-cli-agents-tools-and-cost-control.md`, `07-measurement-billing-and-governance.md` |
+| `06-chat-history-and-memory.md` | `01-ide-context-and-prompt-flow.md`, `03-cli-context-and-tool-output.md` |
+| `07-usage-and-billing-visibility.md` | `07-measurement-billing-and-governance.md` |
+| `08-ai-evals-and-observability.md` | `07-measurement-billing-and-governance.md` |
+| `09-ideal-workshop-repo.md` | `08-applied-repo-review-and-adoption.md` |
+| `10-next-steps-and-extra-topics.md` | `08-applied-repo-review-and-adoption.md` |
+| `11-copilot-surfaces-and-context-boundaries.md` | `01` through `06`, by surface |
+| `12-customer-environment-review.md` | `08-applied-repo-review-and-adoption.md` |
diff --git a/labs/README.md b/labs/README.md
index 78ea9ea..4c6dfbd 100644
--- a/labs/README.md
+++ b/labs/README.md
@@ -1,79 +1,99 @@
-# Context Management and Token Optimization Workshop
+# Token Optimization and Context Engineering Workshop
-Customer-facing source material for a practical developer training session. The workshop can be delivered as a 1-hour demo, 2-hour workshop, or 4-hour hands-on lab.
+Customer-facing source material for practical developer training. The curriculum uses a shared foundation, three use-case tracks, and shared closeout labs so the same concepts can be taught without maintaining separate full curricula.
## Audience
-Developers, technical leads, platform engineers, engineering managers, and AI enablement teams who use AI coding assistants, chat clients, MCP servers, or agentic development tools.
+Developers, technical leads, platform engineers, engineering managers, and AI enablement teams who use GitHub Copilot in VS Code/IDEs, GitHub Copilot CLI, GitHub.com, coding agent, or code review.
-## Outcomes
+## Lab tracks
-Attendees will learn how to:
-
-- Reduce unnecessary context and token usage without reducing quality.
-- Choose the right model, tool, and session strategy for a task.
-- Manage instructions, customizations, memory, and chat history deliberately.
-- Evaluate whether workflow changes improve quality, speed, and cost.
-- Locate monthly usage and billing signals across common AI clients.
-- Apply these practices to future work projects.
-
-## Prerequisites
-
-- A GitHub account and access to an AI coding/chat tool.
-- A small sample repository with issues, tests, documentation, and a few realistic defects.
-- Optional: access to organization billing, Copilot usage, cloud AI usage, or model provider dashboards.
-- Optional: W&B Weave, LangSmith, promptfoo, OpenAI Evals, Azure AI Evaluation, or another eval/observability tool.
+| Learner use case | Run these labs |
+| --- | --- |
+| VS Code/IDE users | `00`, `01`, `02`, `07`, `08` |
+| GitHub Copilot CLI users | `00`, `03`, `04`, `07`, `08` |
+| GitHub.com/code review users | `00`, `05`, `06`, `07`, `08` |
+| Full cross-surface practitioner | `00` through `08` |
+
+## Labs
+
+| Lab | Track | Title |
+| --- | --- | --- |
+| [`00`](00-foundations.md) | Shared | Foundations for every Copilot surface |
+| [`01`](01-ide-context-and-prompt-flow.md) | VS Code/IDE | Context and prompt flow |
+| [`02`](02-ide-instructions-tools-and-mcp.md) | VS Code/IDE | Instructions, tools, and MCP |
+| [`03`](03-cli-context-and-tool-output.md) | GHCP CLI | Session context and tool output |
+| [`04`](04-cli-agents-tools-and-cost-control.md) | GHCP CLI | Agents, tools, and cost control |
+| [`05`](05-github-web-context-and-coding-agent.md) | GitHub.com | Web context and coding agent |
+| [`06`](06-github-code-review-and-pr-hygiene.md) | GitHub.com/code review | PR and review hygiene |
+| [`07`](07-measurement-billing-and-governance.md) | Shared | Measurement, billing, and governance |
+| [`08`](08-applied-repo-review-and-adoption.md) | Shared | Applied repo review and adoption |
+
+See [`MIGRATION.md`](MIGRATION.md) for the map from the previous 13-lab sequence to the consolidated track model.
## Delivery formats
-### 1-hour version: executive demo + guided practice
+### 1-hour awareness + demo
| Time | Topic |
| --- | --- |
-| 0:00-0:05 | Why token optimization matters after billing/usage model changes |
-| 0:05-0:15 | Context basics and common waste patterns |
-| 0:15-0:25 | Instructions, customizations, and tool/MCP hygiene |
-| 0:25-0:35 | Chat session management and model choice |
-| 0:35-0:45 | Usage visibility and monthly reporting |
-| 0:45-0:55 | Mini eval demo |
-| 0:55-1:00 | Next steps and team commitments |
+| 0:00-0:05 | Why token optimization and context engineering matter |
+| 0:05-0:20 | Lab `00`: shared mental model and five levers |
+| 0:20-0:40 | One selected use-case lab: `01`, `03`, or `05` |
+| 0:40-0:50 | Lab `07`: measurement and governance overview |
+| 0:50-1:00 | Lab `08`: Monday-morning checklist and commitments |
-Recommended chapters: 01, 02, 03, 05, 07, 08, 10.
+### 1-hour use-case track
-### 2-hour version: demo + hands-on workshop
+| Track | Labs |
+| --- | --- |
+| VS Code/IDE | `00`, `01`, `02`, `08` |
+| GHCP CLI | `00`, `03`, `04`, `08` |
+| GitHub.com/code review | `00`, `05`, `06`, `08` |
+
+Use the track-specific labs for the demo and keep measurement as a short facilitator discussion.
+
+### 2-hour practitioner workshop
| Time | Topic |
| --- | --- |
-| 0:00-0:10 | Goals, billing context, and customer trust |
-| 0:10-0:25 | Context management fundamentals |
-| 0:25-0:45 | Lab: trim context and improve a prompt |
-| 0:45-1:05 | Instructions, customizations, MCP, and tools |
-| 1:05-1:25 | Lab: design a lean agent/tool setup |
-| 1:25-1:40 | Model choice, memory, and chat session strategy |
-| 1:40-1:55 | Usage visibility and evals |
-| 1:55-2:00 | Next steps |
-
-Recommended chapters: 01 through 08 and 10.
+| 0:00-0:15 | Lab `00`: shared foundation |
+| 0:15-1:05 | One complete track bundle: IDE, CLI, or GitHub.com/code review |
+| 1:05-1:30 | Hands-on exercise for the selected track |
+| 1:30-1:45 | Lab `07`: measurement, billing, and governance |
+| 1:45-2:00 | Lab `08`: adoption plan |
-### 4-hour version: full lab
+### 4-hour applied environment review
| Time | Topic |
| --- | --- |
-| 0:00-0:15 | Workshop framing and repository walkthrough |
-| 0:15-0:45 | Context management fundamentals |
-| 0:45-1:15 | Lab: context audit and prompt refactor |
-| 1:15-1:45 | Instructions and agent customization design |
-| 1:45-2:15 | MCP/tool optimization lab |
-| 2:15-2:45 | Chat sessions, memory, and history lab |
-| 2:45-3:10 | Model choice and cost-quality tradeoffs |
-| 3:10-3:35 | Monthly usage visibility across clients |
-| 3:35-3:55 | AI evals and observability |
-| 3:55-4:00 | Next steps and commitment plan |
-
-Recommended chapters: all chapters.
+| 0:00-0:15 | Preflight and safety rules |
+| 0:15-0:45 | Lab `00`: shared foundation |
+| 0:45-1:55 | Labs `01` through `06`: surface rotation or breakout tracks |
+| 1:55-2:35 | Track-specific hands-on exercise |
+| 2:35-3:10 | Lab `07`: measurement, billing, and governance |
+| 3:10-3:50 | Lab `08`: customer or sandbox repo review |
+| 3:50-4:00 | Commitments and 30-day adoption plan |
+
+Use the customer preflight checklist and a fallback repository. Do not project proprietary source unless the customer explicitly approves it.
+
+## Student materials
+
+- [`../decks/token-optimization-context-engineering.pptx`](../decks/token-optimization-context-engineering.pptx) - primary workshop delivery deck with embedded speaker notes
+- [`../decks/token-optimization-context-engineering.executive.pptx`](../decks/token-optimization-context-engineering.executive.pptx) - executive briefing visual variant
+- [`../decks/token-optimization-context-engineering.technical.pptx`](../decks/token-optimization-context-engineering.technical.pptx) - technical deep dive visual variant
+- [`../decks/token-optimization-context-engineering.outline.md`](../decks/token-optimization-context-engineering.outline.md) - editable deck source
+- [`../resources/copilot-surface-matrix.md`](../resources/copilot-surface-matrix.md) - living surface reference
+- [`../templates/README.md`](../templates/README.md) - copy/paste starter customization files
+- [`../exercises/README.md`](../exercises/README.md) - track-specific hands-on exercises
+- [`../resources/context-inventory-worksheet.md`](../resources/context-inventory-worksheet.md)
+- [`../resources/instruction-diet-worksheet.md`](../resources/instruction-diet-worksheet.md)
+- [`../resources/customer-preflight-checklist.md`](../resources/customer-preflight-checklist.md)
+- [`../resources/monday-morning-checklist.md`](../resources/monday-morning-checklist.md)
## Suggested facilitation style
+- Pick one use-case track before the workshop unless the room is explicitly cross-surface.
- Show one bad example, one improved example, and one reusable checklist per topic.
- Keep the tone supportive: the goal is better outcomes, not blaming users for usage.
- Tie every recommendation to quality, security, cost, or developer experience.
diff --git a/requirements-dev.txt b/requirements-dev.txt
new file mode 100644
index 0000000..27ae79f
--- /dev/null
+++ b/requirements-dev.txt
@@ -0,0 +1,2 @@
+python-pptx==1.0.2
+markitdown==0.1.5
diff --git a/resources/context-inventory-worksheet.md b/resources/context-inventory-worksheet.md
new file mode 100644
index 0000000..9440766
--- /dev/null
+++ b/resources/context-inventory-worksheet.md
@@ -0,0 +1,43 @@
+# Context Inventory Worksheet
+
+Use this worksheet before a hands-on task or during the 4-hour customer environment review.
+
+## Task
+
+```markdown
+Task:
+Surface:
+Repository or project:
+Success criteria:
+Human approval needed before:
+```
+
+## Inventory
+
+| Context item | Source | Type | Needed? | Risk | Action |
+| --- | --- | --- | --- | --- | --- |
+| Example: `.github/copilot-instructions.md` | Repo | Always-on instruction | Yes | Long or stale | Trim to stable rules |
+| Example: build log | Terminal | Tool output | Maybe | Verbose | Summarize or grep first |
+| | | | | | |
+
+## Context types
+
+- Always-on instruction
+- Path-specific instruction
+- Prompt file or skill
+- Agent definition
+- Repository documentation
+- Issue or PR context
+- Selected code or file reference
+- Tool output
+- Retrieved documentation
+- Conversation history
+- Human review gate
+
+## Decisions
+
+1. What context is required?
+2. What context can be summarized?
+3. What context should stay out of the session?
+4. What should become a durable repo asset?
+5. What should be measured after the task?
diff --git a/resources/copilot-surface-matrix.md b/resources/copilot-surface-matrix.md
new file mode 100644
index 0000000..365f71c
--- /dev/null
+++ b/resources/copilot-surface-matrix.md
@@ -0,0 +1,24 @@
+# Copilot Surface Matrix
+
+Last verified: 2026-05-12. Copilot capabilities and billing details change frequently; verify surface-specific claims against current GitHub Docs before delivery.
+
+Use this as the living reference for the workshop. The deck should show a simplified version.
+
+| Surface | Best for | Context controls | Routing and model controls | Measurement visibility | Recommended habits |
+| --- | --- | --- | --- | --- | --- |
+| Copilot CLI | Token-visible agentic work, repo exploration, command-heavy tasks | Session boundaries, working directory, file references, tool allow/deny, subagents, summaries, content exclusion | Model switching when available; delegate discovery, tasks, review, and long-running work | CLI usage/context commands and traces when configured | Start one session per task, summarize before switching focus, avoid raw logs, delegate noisy work |
+| VS Code Copilot Chat | Day-to-day coding, selected code, planning, implementation, review | Selection, open files, workspace context, custom instructions, prompt files, custom agents, MCP tools, path-specific instructions | Ask/Plan/Agent modes; model picker when enabled; subagents when available | Used references, chat context indicators, code review comments, billing/usage views by plan | Use Ask for learning, Plan before complex edits, Agent for implementation, keep instructions short |
+| GitHub.com web chat | Questions grounded in repository, issue, pull request, or discussion context | Repository/issue/PR context, attachments, generated files, subthreads, personal/repo/org instructions | Model picker and response regeneration when available | GitHub usage and billing views by plan; less live token visibility | Ask from the page that has the relevant context, keep threads focused, move durable guidance into repo assets |
+| Copilot coding or cloud agent | Asynchronous implementation from issues or tasks | Issue body, acceptance criteria, linked files, repo instructions, path-specific instructions, tools configured for the agent | Agent profile and product defaults; model details may not be user-controlled | PRs, task outcomes, org/enterprise usage reporting where available | Write scoped issues, include validation commands, review generated PRs, require approval for risky changes |
+| Copilot code review | PR feedback and suggested fixes | PR diff, base branch instructions, path-specific instructions, repository knowledge, excluded files | Purpose-built review system; user model switching is not supported | Review request/quota and billing views by plan; review comments and false-positive rate | Keep PRs small, tune instructions, avoid unnecessary auto-review on draft-heavy workflows, validate findings |
+
+## What to do when a lever is unavailable
+
+- If a surface does not expose live token usage, measure indirectly with billing views, review counts, retry rates, and time-to-merge.
+- If a surface does not expose model switching, route the work to another surface or adjust the workflow scope.
+- If a surface does not expose tool controls, use repository instructions, issue templates, content exclusion, and human review gates.
+- If a workflow needs sensitive context, prefer local review, redaction, or a sandbox repository.
+
+## Secondary surfaces
+
+JetBrains, Visual Studio, Xcode, GitHub Mobile, and other clients support subsets of the same ideas. Teach them as variants unless the audience is centered on one of those tools.
diff --git a/resources/customer-preflight-checklist.md b/resources/customer-preflight-checklist.md
new file mode 100644
index 0000000..c44bde0
--- /dev/null
+++ b/resources/customer-preflight-checklist.md
@@ -0,0 +1,37 @@
+# Customer Environment Preflight Checklist
+
+Use this before running the 4-hour applied customer environment track.
+
+## People
+
+- [ ] Developers who can inspect the selected repository are attending.
+- [ ] A repository owner or technical lead is attending.
+- [ ] An admin or platform owner can answer Copilot policy, model, billing, and content exclusion questions.
+- [ ] Participants understand that proprietary findings should be shared only in anonymized form.
+
+## Repository
+
+- [ ] A safe repository or sandbox branch is selected.
+- [ ] The repository has realistic issues, tests, docs, and pull requests.
+- [ ] The team has permission to use the repository in the workshop.
+- [ ] A public fallback repository is ready.
+
+## Copilot access
+
+- [ ] Attendees can sign in to the relevant Copilot surfaces.
+- [ ] VS Code or preferred IDE is installed if hands-on IDE work is planned.
+- [ ] GitHub.com repository, issue, pull request, and code review access is confirmed.
+- [ ] Copilot coding/cloud agent availability is confirmed if that surface is included.
+
+## Safety
+
+- [ ] Secrets, production configs, incident logs, and customer data are out of scope.
+- [ ] Screen sharing rules are agreed.
+- [ ] No new auto-review, model, MCP, or policy changes will be enabled without an owner.
+- [ ] Sensitive findings will be captured locally, not pasted into public notes.
+
+## Measurement
+
+- [ ] The team knows where to find usage or billing views for the relevant plan.
+- [ ] At least one measurable baseline is selected: usage, review count, prompt retries, PR churn, or time-to-merge.
+- [ ] Owners are assigned for follow-up actions.
diff --git a/resources/instruction-diet-worksheet.md b/resources/instruction-diet-worksheet.md
new file mode 100644
index 0000000..7eda42b
--- /dev/null
+++ b/resources/instruction-diet-worksheet.md
@@ -0,0 +1,39 @@
+# Instruction Diet Worksheet
+
+Use this to reduce always-on Copilot context while preserving useful guidance.
+
+## Instruction file under review
+
+```markdown
+File:
+Owner:
+Last reviewed:
+Primary surfaces affected:
+```
+
+## Sort the content
+
+| Current instruction or section | Keep always-on? | Better home | Reason |
+| --- | --- | --- | --- |
+| | Yes / No | Repo instruction / path instruction / prompt / skill / agent / docs / remove | |
+
+## Where guidance belongs
+
+| Guidance type | Best home |
+| --- | --- |
+| Stable project purpose, stack, and must-follow rules | `.github/copilot-instructions.md` |
+| Rules for specific languages, directories, or file types | `.github/instructions/**/*.instructions.md` |
+| Repeatable task workflow | `.github/prompts/*.prompt.md` |
+| Specialized workflow guidance loaded on demand | `.github/skills/*/SKILL.md` |
+| Specialist role with tools and boundaries | `.github/agents/*.agent.md` |
+| Long standards, diagrams, onboarding, or architecture detail | Linked docs |
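+
+As a sketch of what a trimmed path-scoped rule can look like (the glob and rule text here are illustrative, not prescriptive), a `.github/instructions/tests.instructions.md` might contain:
+
+```markdown
+---
+applyTo: "**/*.spec.ts"
+---
+
+- Use the existing test helpers before writing new fixtures.
+- Keep each test focused on one behavior; avoid shared mutable state.
+- Do not add snapshot tests for output containing timestamps.
+```
+
+Because these rules load only when matching files are in context, they cost nothing on unrelated tasks.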
+
+## Checklist
+
+- [ ] Remove team history and onboarding essays from always-on instructions.
+- [ ] Remove rules that the model can infer from code.
+- [ ] Split path-specific rules by file type or directory.
+- [ ] Replace copied policy text with links to source-of-truth docs.
+- [ ] Keep examples short and concrete.
+- [ ] Check for conflicts across personal, repo, path-specific, agent, and organization guidance.
+- [ ] Re-test one representative task after trimming.
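+
+When a workflow graduates from always-on instructions to a prompt file, the result can be small. This sketch is illustrative (the description and body are examples, not required content):
+
+```markdown
+---
+description: Plan a feature before implementation
+---
+
+Outline the files to change, the tests to add, and the open questions.
+Do not write code yet.
+```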
diff --git a/resources/monday-morning-checklist.md b/resources/monday-morning-checklist.md
new file mode 100644
index 0000000..a359c8c
--- /dev/null
+++ b/resources/monday-morning-checklist.md
@@ -0,0 +1,29 @@
+# Monday-Morning Checklist
+
+Adopt three habits first. Add more after the team has a baseline.
+
+## Individual developer habits
+
+- [ ] Start a new session or thread for unrelated work.
+- [ ] Use planning before multi-file implementation.
+- [ ] Reference files, selections, issues, or PRs instead of broad directories.
+- [ ] Summarize or filter logs before sending them to Copilot.
+- [ ] Choose the lowest-cost model or surface that can reliably complete the task.
+- [ ] Ask for a diagnosis or plan before implementation when the task is ambiguous.
+- [ ] End important tasks with a short handoff summary.
+
+## Repository habits
+
+- [ ] Keep `.github/copilot-instructions.md` short and stable.
+- [ ] Move targeted rules to path-specific instructions, prompts, skills, or agents.
+- [ ] Keep issue and PR templates specific enough for Copilot and humans.
+- [ ] Exclude generated, large, sensitive, and irrelevant files where policy supports it.
+- [ ] Keep a living surface matrix or team playbook current.
+
+## Admin and platform habits
+
+- [ ] Review content exclusion and model access policies.
+- [ ] Review automatic code review settings.
+- [ ] Set or confirm budgets and alerts where available.
+- [ ] Identify one usage or quality dashboard owner.
+- [ ] Revisit guidance after major Copilot product changes.
diff --git a/specs/curriculum-spec-1.md b/specs/curriculum-spec-1.md
new file mode 100644
index 0000000..7d3731c
--- /dev/null
+++ b/specs/curriculum-spec-1.md
@@ -0,0 +1,264 @@
+# Token Optimization Curriculum — Extension Spec
+
+**Purpose:** Extend the existing `DevExpGbb/token-optimization` repo (currently CLI-focused) to cover the **VS Code IDE** and **GitHub.com (web)** surfaces of GitHub Copilot. This document is a work brief for GitHub Copilot (coding agent or interactive) to plan and produce the additions in parallel with the companion slide deck.
+
+**Repo:** https://github.com/DevExpGbb/token-optimization
+**Owner:** Cody Carlson (codycarlson@microsoft.com), Sr Solution Engineer GBB, Microsoft
+**Date:** 2026-05-12
+
+---
+
+## 1. Three Outcome Goals (the north star)
+
+Every artifact added by this work must visibly advance at least one of these. If an addition doesn't, drop it.
+
+### Goal 1 — Readers feel comfortable controlling cost
+Audience leaves knowing **what gets metered, where it burns, and which dials they own**, surface by surface. Includes:
+- A clear mental model of the **usage-based billing (UBB)** transition that takes effect **June 1, 2026** (announced April 27, 2026 — see `https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/`).
+- A unit-economics view: premium-request entitlements today (50 / 300 / 1,500 per month on Free / Pro / Pro+), GitHub AI Credits and per-token metering tomorrow.
+- Surface-specific cost drivers (CLI re-send, IDE context attachments, web coding-agent runs).
+- Governance: how to make spend visible without becoming the spend police.
+
+### Goal 2 — Apply context-management & context-engineering best practices to existing projects
+Audience can walk into their **own repo on Monday** and apply the practices. Includes:
+- **Repo-level**: `.github/copilot-instructions.md`, `.instructions.md` path-scoped rules, `.prompt.md` files, `.agent.md` custom agents, `.chatmode.md` custom chat modes.
+- **Workspace-level**: `.vscode/mcp.json` (per-repo MCP) vs user-profile MCP, attachment hygiene (`#file`, `#selection`, `#codebase`, drag-and-drop pinned context).
+- **Mode-level**: Ask vs Edit vs Agent vs Plan — when each is the cheap right tool.
+- **Org-level**: Copilot Spaces for curated cross-repo knowledge (replaced Knowledge Bases on Sept 12, 2025 — `https://docs.github.com/en/copilot/concepts/context/spaces`).
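+
+For the workspace-level MCP point, a minimal `.vscode/mcp.json` sketch looks like the following (the server name and package are placeholders, not a recommendation):
+
+```json
+{
+  "servers": {
+    "docs-search": {
+      "command": "npx",
+      "args": ["-y", "@example/docs-mcp-server"]
+    }
+  }
+}
+```
+
+Keeping the server list per-repo means only projects that need a tool pay its tool-definition token overhead.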
+
+### Goal 3 — Practical, hands-on demo and takeaway exercises
+Every chapter ends with something the reader **does**, not just reads. Includes:
+- A `templates/` directory in the repo with copy-paste starter files for instructions, prompt files, MCP config, and chat modes.
+- Before/after exercises per surface (same task, naive flow vs engineered flow, measure the delta).
+- A "Monday morning checklist" the reader can run in <30 minutes against their own repo.
+
+---
+
+## 2. Current Repo State (read-only context for Copilot)
+
+```
+/
+├── README.md
+├── index.html # Landing page
+├── src/ # Budgeting web app (interactive demo)
+├── labs/ # Track-based curriculum labs
+│ ├── README.md # 1h / 2h / 4h delivery outlines
+│ ├── 00-foundations.md
+│ ├── 01-... # through 08
+│ ├── 08-applied-repo-review-and-adoption.md
+│ └── MIGRATION.md
+├── .github/workflows/ # Pages deploy, etc.
+└── (deck lives separately, this spec covers what's in-repo)
+```
+
+PR #5 ("Add context engineering curriculum tracks") is already iterating on the labs structure. Do not append another long sequence of surface-specific labs. Consolidate the existing and proposed topics into a smaller concept-driven spine.
+
+---
+
+## 3. Proposed Additions
+
+### 3.1 New top-level directories
+
+```
+templates/ # Copy-paste starters (Goal 3)
+├── README.md # How to use these in your repo
+├── copilot-instructions.md # Annotated example for .github/
+├── instructions/
+│ ├── frontend.instructions.md # applyTo: 'src/web/**'
+│ ├── tests.instructions.md # applyTo: '**/*.spec.ts'
+│ └── docs.instructions.md # applyTo: '**/*.md'
+├── prompts/
+│ ├── plan-feature.prompt.md
+│ ├── review-pr.prompt.md
+│ └── triage-issue.prompt.md
+├── chatmodes/
+│ └── planner.chatmode.md # Custom Plan-style chat mode
+├── agents/
+│ └── doc-writer.agent.md # Example custom agent
+└── mcp/
+ ├── workspace.mcp.json # .vscode/mcp.json starter
+ └── README.md # When workspace vs user profile
+
+exercises/ # Before/after labs (Goal 3)
+├── README.md
+├── 01-vscode-context-attachments/
+│ ├── README.md # Task, baseline, instrumented run
+│ ├── naive-transcript.md # What "no engineering" costs
+│ └── engineered-transcript.md # What hygiene saves
+├── 02-vscode-instructions-stack/
+├── 03-cli-session-scope/
+├── 04-cli-agent-tool-control/
+├── 05-github-coding-agent-scope/
+├── 06-github-code-review-hygiene/
+├── 07-spaces-vs-adhoc-prompts/
+└── 08-monday-morning-audit/ # The 30-minute self-audit
+```
+
+
+### 3.2 Use-case lab tracks (maximum 9 total labs)
+
+Use nine labs or fewer total. Do not create nine labs per track. The curriculum should give learners a track that matches their use case while keeping shared concepts in one maintainable system.
+
+The track model:
+
+- **Shared foundation:** everyone starts with the same token optimization and context engineering mental model.
+- **Use-case tracks:** learners choose VS Code/IDE, GitHub Copilot CLI, or GitHub.com/code review labs based on how they use Copilot most often.
+- **Shared closeout:** everyone returns to measurement, governance, repo review, and adoption.
+
+Each track lab follows the same shape: *Concept -> Surface mechanics -> Levers -> Hands-on -> Checklist*. Similar ideas intentionally repeat across tracks, but the exercises and screenshots should match the learner's surface.
+
+| File | Title | Topics grouped here | Primary hands-on |
+|------|-------|---------------------|------------------|
+| `labs/00-foundations.md` | Foundations for every Copilot surface | Token mental model, context inputs, quality waste, billing nuance, five levers | Identify context waste in one sample workflow |
+| `labs/01-ide-context-and-prompt-flow.md` | VS Code/IDE track: context and prompt flow | Ask/Edit/Agent/Plan, attachments, `#selection`, `#file`, `#codebase`, chat/session boundaries | Rewrite a broad IDE request into a scoped prompt with deliberate attachments |
+| `labs/02-ide-instructions-tools-and-mcp.md` | VS Code/IDE track: instructions, tools, and MCP | `.github/copilot-instructions.md`, `.instructions.md`, `.prompt.md`, `.chatmode.md`, workspace MCP, model picker | Split a bloated IDE setup into targeted repo, path, prompt, chat mode, and MCP assets |
+| `labs/03-cli-context-and-tool-output.md` | GHCP CLI track: session context and tool output | `/clear`, `/compact`, focused sessions, command output filtering, prompt discipline, context visibility | Turn a noisy CLI troubleshooting session into a focused low-context workflow |
+| `labs/04-cli-agents-tools-and-cost-control.md` | GHCP CLI track: agents, tools, and cost control | Subagents, MCP/tool scope, model choice, approvals, usage visibility, durable handoffs | Decide when to do work directly, delegate, or summarize before continuing |
+| `labs/05-github-web-context-and-coding-agent.md` | GitHub.com track: web context and coding agent | Repo/issue/PR page context, Copilot Spaces, coding agent issue shape, `copilot/` branches | Convert a vague issue into a scoped coding-agent task with acceptance criteria |
+| `labs/06-github-code-review-and-pr-hygiene.md` | GitHub.com/code review track: PR and review hygiene | Small PRs, review instructions, automatic review policy, code review limitations, human gates | Improve a PR description and review-instruction set for higher-signal Copilot review |
+| `labs/07-measurement-billing-and-governance.md` | Shared closeout: measurement, billing, and governance | UBB mental model, premium requests, dashboards, budgets, model policies, content exclusion, eval signals | Build a spend and quality visibility checklist without shaming users |
+| `labs/08-applied-repo-review-and-adoption.md` | Shared closeout: applied repo review and adoption | Customer environment review, ideal workshop repo, Monday-morning audit, 30-day operating model, next steps | Run the audit and pick three changes to implement |
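+
+For the lab `05` hands-on, a scoped coding-agent issue can be as simple as this sketch (the paths and criteria are invented for illustration):
+
+```markdown
+## Task
+Add input validation to the signup form.
+
+## Scope
+- Touch only `src/web/signup/` and its tests.
+- Do not change the API contract.
+
+## Acceptance criteria
+- [ ] Empty email shows an inline error.
+- [ ] Tests pass with `npm test`.
+```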
+
+This replaces the proposed `labs/11` through `labs/18` expansion. The topics remain, but they become three use-case tracks inside a maximum-nine-lab curriculum instead of eight additional standalone chapters.
+
+#### Track bundles
+
+| Learner use case | Run these labs |
+| --- | --- |
+| VS Code/IDE users | 00, 01, 02, 07, 08 |
+| GitHub Copilot CLI users | 00, 03, 04, 07, 08 |
+| GitHub.com/code review users | 00, 05, 06, 07, 08 |
+| Full cross-surface practitioner | 00 through 08 |
+
+#### Merge map
+
+| Existing/proposed material | Move into |
+| --- | --- |
+| Current `00` and `01` | `00-foundations.md` |
+| Current `04` and `06`, proposed VS Code modes and attachments labs | `01-ide-context-and-prompt-flow.md` |
+| Current `02`, proposed VS Code instructions/prompts/chat modes and MCP hygiene labs | `02-ide-instructions-tools-and-mcp.md` |
+| Current `03` and CLI parts of current `04`/`06` | `03-cli-context-and-tool-output.md` |
+| Current `05`, CLI agents/tools material, and CLI usage visibility | `04-cli-agents-tools-and-cost-control.md` |
+| Proposed GitHub.com surface map, proposed Spaces lab, proposed coding agent lab | `05-github-web-context-and-coding-agent.md` |
+| Code review parts of current `11`/`12` and proposed PR review material | `06-github-code-review-and-pr-hygiene.md` |
+| Current `07`, current `08`, governance parts of current `12`, proposed cross-surface governance | `07-measurement-billing-and-governance.md` |
+| Current `09`, `10`, `12`, Monday-morning checklist | `08-applied-repo-review-and-adoption.md` |
+
+Update `labs/README.md` with delivery options that reference the track bundles:
+
+| Track | Use labs |
+| --- | --- |
+| 1-hour awareness | 00, one selected track lab, 07, 08 |
+| 1-hour IDE-focused | 00, 01, 02, 08 |
+| 1-hour CLI-focused | 00, 03, 04, 08 |
+| 1-hour Web/code-review focused | 00, 05, 06, 08 |
+| 2-hour practitioner | 00, one complete track bundle, 07, 08 |
+| 4-hour applied review | 00 through 08, with customer/self-review time |
+
+### 3.3 Update existing files
+
+- `README.md` — Add "Tracks" section: CLI / IDE / Web / Full. Link `templates/` and `exercises/`.
+- `labs/README.md` — New delivery presets that include the IDE and Web chapters.
+- `index.html` — Add cards for IDE and Web tracks alongside the existing CLI material.
+
+---
+
+## 4. Concept-to-Surface Mapping (so chapters stay tight)
+
+This table is the editorial backbone. Each use-case track should cover the same core levers, but the mechanics and exercise should match the track surface. Cross-surface comparison belongs primarily in the shared foundation and closeout labs.
+
+| Concept | CLI (existing) | VS Code IDE (new) | GitHub.com Web (new) |
+|---|---|---|---|
+| **Primary cost driver** | Re-send of full history each turn | Attachments + agent tool-call sprawl | Coding-agent runs + Spaces queries |
+| **Cheap-mode default** | `/explore` for read-only | **Ask mode** for read-only | PR review for narrow scope |
+| **Expensive-mode** | `/delegate`, parallel agents | **Agent mode** with broad `#codebase` | Coding agent on vague issues |
+| **Context hygiene** | `/clear`, `/compact`, `/context` | New chat, attachment pinning, `#codebase` only when needed | Scoped issue body, narrow Space sources |
+| **Persistent rules** | `AGENTS.md`, slash commands | `.github/copilot-instructions.md` + `.instructions.md` | Repo custom instructions (rendered to agent) |
+| **Tool/scope control** | Specialist agents, narrow blast radius | Custom **chat modes**, **MCP per workspace** | Coding-agent allowlists, Space sources |
+| **Measurement** | OTel exporters, `/usage` | VS Code chat history, token telemetry | Org-level usage dashboards, audit log |
+| **Monday-morning win** | Add a `/compact` checkpoint | Add `.github/copilot-instructions.md` | Convert one Slack-thread question into a Space |
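+
+The IDE "Monday-morning win" can be this small. A starter `.github/copilot-instructions.md` sketch (the project details are placeholders):
+
+```markdown
+# Project notes for Copilot
+
+- TypeScript monorepo; web app in `src/web`, API in `src/api`.
+- Use the existing logger; never `console.log` in committed code.
+- Tests live next to source as `*.spec.ts`; run with `npm test`.
+```
+
+Keep it under a screenful; anything longer belongs in path-scoped instructions or linked docs.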
+
+---
+
+## 5. Authoring Conventions (so Copilot's output looks like the rest)
+
+- **Voice:** Direct, present tense. Engineer-to-engineer. No "in this section we will…".
+- **Length per chapter:** ~600–1200 words. Code blocks count.
+- **Each chapter ends with:**
+ 1. A **5-bullet checklist** the reader can act on today.
+ 2. A **hands-on exercise** that links to the matching `exercises/` directory.