Old software engineering ideas are not obsolete. Many of them become more important in the agentic AI era.
This repository adapts timeless engineering content into modern agentic artifacts: skills, prompts, checklists, workflows, review rubrics, and learning drills.
Each case study starts with one valuable source: a talk, video, article, book chapter, or essay. The study then extracts the durable principles, maps them to today's LLM-assisted development workflows, and turns them into reusable artifacts.
The goal is not to summarize old content.
The goal is to preserve its engineering wisdom and make it executable inside modern AI-assisted workflows.
```
Raw valuable content
  -> Core principles extraction
  -> Modern context mapping
  -> Agentic interpretation
  -> Application layer selection
  -> Concrete artifact generation
```
This project treats adaptation as a layer. The same source can produce different outputs depending on the chosen application layer:
- Skill layer
- Prompt layer
- Checklist layer
- Workflow layer
- Agent instruction layer
- Review rubric layer
- Learning drill layer
- Benchmark layer
| Case | Source Theme | Modern Adaptation | Artifacts |
|---|---|---|---|
| 001 - Abstraction, Hardware, and LLM Skills | Abstraction cost, hardware awareness, cache behavior, runtime reality | Agentic coding needs grounding across prompt, repo, runtime, data, and failure layers | 4 skills, prompts, checklists, workflow |
```
.
+-- cases/
|   |-- README.md
|   `-- 001-abstraction-hardware-llm-skills/
|       |-- README.md
|       |-- source-notes.md
|       |-- principle-extraction.md
|       |-- modern-mapping.md
|       |-- adaptation-layers.md
|       |-- reflection.md
|       `-- artifacts/
|           |-- skills/
|           |-- prompts/
|           |-- checklists/
|           `-- workflows/
+-- docs/
|   |-- README.md
|   |-- adaptation-method.md
|   |-- glossary.md
|   |-- taxonomy.md
|   `-- using-artifacts.md
`-- templates/
    |-- adaptation-layer-template.md
    |-- case-study-template.md
    `-- skill-template.md
```
- Pick a case study.
- Read its principle extraction and modern mapping.
- Choose an artifact that matches your workflow.
- Copy or install the artifact into your agent, prompt library, review flow, or team docs.
- Adapt it to your own project constraints.
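As a concrete sketch of the copy step, here is one way to install a skill artifact into a local prompt library. All paths are illustrative assumptions, not part of this repo: the commands below fake the repo layout with a temp directory so they run anywhere, but in practice `SRC` would point at `artifacts/skills/` inside your clone, and `DEST` at wherever your agent or team actually loads skills from.

```shell
# Illustrative only: simulate the repo's artifacts/skills/ directory in a
# temp location, then copy a skill into a (hypothetical) prompt library.
SRC="$(mktemp -d)/artifacts/skills"          # stand-in for the repo path
mkdir -p "$SRC"
printf '# abstraction-cost-auditor\n' > "$SRC/abstraction-cost-auditor.md"

DEST="$(mktemp -d)/prompt-library/skills"    # stand-in for your agent's skill dir
mkdir -p "$DEST"
cp "$SRC"/*.md "$DEST"/

ls "$DEST"    # prints abstraction-cost-auditor.md
```

The only real step here is the final `cp`; everything before it just sets up stand-in paths. Adapt the source and destination to your own checkout and tooling.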
See docs/using-artifacts.md for practical usage guidance.
- `abstraction-cost-auditor`: Reviews whether an abstraction pays for itself.
- `cache-aware-performance-reviewer`: Reviews performance beyond Big-O by considering memory, cache, allocations, IO, and runtime costs.
- `llm-code-grounding-loop`: Forces LLM-generated code to stay grounded in real project files, data flow, runtime behavior, and verification.
- `three-level-learning-drill`: Teaches concepts by moving through API, runtime, and data/resource levels.
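The skills above are plain markdown documents. As a rough sketch only (the section headings below are an assumption for illustration, not the repo's actual skill format, which is defined by the files under `artifacts/skills/` and `templates/skill-template.md`), a skill artifact might be shaped like this:

```markdown
# abstraction-cost-auditor

## When to use
Reviewing a change that introduces a new abstraction (wrapper, layer,
interface, or framework) into the codebase.

## Procedure
1. Name the concrete code the abstraction replaces or hides.
2. List what it costs: indirection, allocations, learning curve, debugging.
3. List what it buys: reuse, testability, isolation, clarity.
4. Verdict: keep as-is, simplify, or inline the abstraction.
```

Because skills are just structured text, they can be pasted into an agent's instructions or kept in a shared prompt library without tool-specific packaging.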
The first case also includes reusable prompts, checklists, and a source-to-artifact workflow under cases/001-abstraction-hardware-llm-skills/artifacts/.
- Preserve the durable engineering idea.
- Do not overfit the adaptation to one tool.
- Make the output executable inside real workflows.
- Prefer concrete artifacts over commentary.
- Keep the historical context visible.
- Do not copy full copyrighted source material into the repo unless you have rights to do so.
See CONTRIBUTING.md.
See CHANGELOG.md. The first public release is v0.1.0.
MIT. See LICENSE.