Apply the modular agent architecture from #14 to Mnemon self-evolution.
This proposal defines self-evolution as a family of installable harness modules that attach to existing host agents. The host agent keeps its own runtime loop, prompt assembly, tool routing, permission model, UI, native skill runtime, and subagent execution. Mnemon provides self-evolution modules around that runtime.
The goal is not to build a new agent runtime. The goal is to let capable existing agents gain self-evolution behavior by installing Mnemon harness modules.
## Architecture Principles
- Keep the host agent in control of execution, prompt assembly, tool routing, native skill discovery, subagent execution, permissions, and UI.
- Keep Mnemon responsible for durable harness state: memory, skill lifecycle state, evidence, proposals, reports, and installable protocols.
- Treat host-native files as generated views, bindings, or projections when possible.
- Use Markdown-first protocols for portability: GUIDE files, hook prompts, protocol skills, and subagent specs.
- Use setup scripts as concrete reference implementations, not as the architecture itself.
- Prefer proposal-first governance for self-modifying behavior.
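The proposal-first principle can be sketched as a small data model: a self-modifying change is recorded together with its evidence and applied only after explicit approval. This is a hypothetical illustration; `Proposal`, `apply_if_approved`, and the field names are assumptions, not Mnemon's actual API.

```python
# Hypothetical sketch of proposal-first governance. Mnemon's real data
# model is not specified in this proposal; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """A self-modifying change that must be approved before it takes effect."""
    target: str                           # e.g. "memory" or "skill"
    change: dict                          # the proposed edit, as structured data
    evidence: list = field(default_factory=list)
    status: str = "pending"               # pending -> approved | rejected

def apply_if_approved(proposal: Proposal, store: dict) -> bool:
    """Apply a proposal only after explicit approval; never silently mutate."""
    if proposal.status != "approved":
        return False
    store.setdefault(proposal.target, []).append(proposal.change)
    return True
```

The point of the sketch is the gate, not the storage: online work only accumulates `evidence`, and nothing reaches durable state until a reviewer flips `status`.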
## Memory Model
The memory loop uses a hot/cold memory model:
- **Working Memory**: bounded Markdown loaded into the host prompt. It is model-friendly but small.
- **Long-Term Memory**: Mnemon-backed durable memory outside the prompt. It is engineering-friendly and scalable.
- **Consolidation**: dreaming writes durable working-memory content into Mnemon, then compacts or evicts working memory.
This preserves the usefulness of Markdown memory while avoiding the capacity ceiling of a single always-loaded file.
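A minimal sketch of the hot/cold model, assuming a list-backed store and a fixed working-memory budget (both are assumptions for illustration; Mnemon's actual storage and eviction policy are not specified here):

```python
# Hypothetical hot/cold memory sketch; budget and storage details are
# assumptions, not Mnemon's implementation.
class Memory:
    def __init__(self, working_budget: int = 3):
        self.working = []    # hot: bounded Markdown notes loaded into the prompt
        self.long_term = []  # cold: durable Mnemon-backed store outside the prompt
        self.budget = working_budget

    def note(self, markdown: str) -> None:
        """Record a note in working memory, consolidating if over budget."""
        self.working.append(markdown)
        if len(self.working) > self.budget:
            self.consolidate()

    def consolidate(self) -> None:
        """'Dreaming': write overflow into long-term memory, then evict it
        from working memory so the prompt stays within its budget."""
        overflow = self.working[:-self.budget]
        self.long_term.extend(overflow)
        self.working = self.working[-self.budget:]
```

The invariant is that the prompt-visible side never exceeds its budget, while nothing is lost: evicted notes move to the durable store rather than being dropped.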
## Skill Model
The skill loop carries procedural self-evolution:
- Active skills are visible to the host agent after Prime sync.
- Stale skills are retained for review, repair, restore, or consolidation.
- Archived skills are retained for audit and recovery.
- Online work records evidence only.
- Curator review produces proposals.
- Approved lifecycle changes go through `skill_manage`.
The host agent still uses its native skill discovery and execution model. Mnemon owns canonical skill lifecycle state.
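The lifecycle states above can be sketched as a small state machine in which `skill_manage` is the only path for approved transitions. The transition table and function signature are illustrative assumptions, not the real `skill_manage` interface:

```python
# Hypothetical lifecycle sketch; transitions mirror the states described
# above (active, stale, archived), but the real interface is unspecified.
ALLOWED = {
    ("active", "stale"),      # flagged during curator review
    ("stale", "active"),      # repaired or restored
    ("stale", "archived"),    # consolidated away
    ("archived", "active"),   # recovered after audit
}

def skill_manage(skills: dict, name: str, new_state: str) -> None:
    """Apply an approved lifecycle change; reject illegal transitions."""
    current = skills.get(name, "active")
    if (current, new_state) not in ALLOWED:
        raise ValueError(f"illegal transition {current} -> {new_state}")
    skills[name] = new_state
```

Centralizing transitions this way keeps Mnemon as the canonical owner of lifecycle state while the host agent continues to discover and execute skills natively.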
## Future Modules
The same modular architecture should support additional self-evolution modules:
- **Eval loop**: collect outcomes, run benchmarks, and feed failures into proposals.
- **Risk loop**: scan proposed memory or skill changes before they become active.
- **Review loop**: coordinate approvals, checkpoints, reports, and rollback context.
- **Policy loop**: maintain host-specific safety and permission guidance.
Each module should remain independently installable.
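One way to read "independently installable" is that every module exposes the same small install surface, and a failure in one module never blocks the others. A hypothetical sketch, assuming a `HarnessModule` protocol and an `install_modules` helper that do not exist in Mnemon today:

```python
# Hypothetical module protocol; illustrates "independently installable",
# not an actual Mnemon API.
from typing import Iterable, List, Protocol

class HarnessModule(Protocol):
    name: str
    def install(self, host_dir: str) -> None: ...    # e.g. write hooks / GUIDE files
    def uninstall(self, host_dir: str) -> None: ...

def install_modules(modules: Iterable[HarnessModule], host_dir: str) -> List[str]:
    """Install each module on its own; one failure does not block the rest."""
    installed = []
    for m in modules:
        try:
            m.install(host_dir)
            installed.append(m.name)
        except Exception:
            continue  # every module remains optional
    return installed
```

Under this reading, the memory loop, skill loop, and future eval/risk/review/policy loops would each be one such module, attachable to a host in any combination.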
## Non-Goals
- Do not replace the host agent runtime.
- Do not require a daemon for the basic harness.
- Do not require heavyweight host adapters as the primary architecture.
- Do not silently mutate high-risk files in the background.
- Do not treat long-term memory recall as automatic prompt injection.
- Do not require one universal skill format across all host agents.
## Current State
- Formal harness docs: `docs/harness/`
- English docs: `docs/harness/`
- Chinese docs: `docs/zh/harness/`
- Interactive pages: `docs/site/memory-loop/site.html` and `docs/site/skill-loop/site.html`
## Relationship To #14
Issue #14 defines the broader modular agent architecture: host agents remain the runtime, while external modules attach through hooks, skills/protocols, subagents, filesystem state, environment configuration, and setup scripts.
This issue applies that architecture specifically to self-evolution. It focuses on memory, skill evolution, and future evaluation/risk/review modules.
## Core Direction
The self-evolution harness is organized around attachable loops: the memory loop, the skill loop, and future loops such as eval, risk, review, and policy.
- Harness modules: `harness/memory-loop/` and `harness/skill-loop/`

## Open Work
## Acceptance Direction
The self-evolution proposal is accepted if the project direction remains consistent with the architecture principles and non-goals above.
## Related