10 changes: 2 additions & 8 deletions AGENTS.md
@@ -37,12 +37,6 @@
<!-- lore:019cb615-0b10-7bbc-a7db-50111118c200 -->
* **Lore auto-recovery can infinite-loop without re-entrancy guard**: Three v0.5.2 bugs causing excessive background LLM requests: (1) Auto-recovery loop — session.error handler injected recovery prompt → could overflow again → loop. Fix: recoveringSessions Set as re-entrancy guard. (2) Curator ran every idle — \`onIdle || afterTurns\` short-circuited (onIdle=true). Fix: \`||\` → \`&&\`. Lesson: boolean flag gating numeric threshold needs AND not OR. (3) shouldSkip() fell back to session.list() on unknown sessions. Fix: remove list fallback, cache in activeSessions.
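The re-entrancy guard in (1) can be sketched in a few lines. This is a minimal TypeScript illustration of the pattern, not the plugin's actual code; `recoveringSessions`, `recover`, and `onSessionError` are illustrative names:

```typescript
// Re-entrancy guard: a Set of session IDs with recovery in flight.
const recoveringSessions = new Set<string>();
let recoverCalls = 0;

async function recover(sessionID: string): Promise<void> {
  recoverCalls++;
  // Simulate a slow recovery step (e.g. an LLM call). A second
  // session.error arriving in this window must not start another recovery.
  await new Promise((resolve) => setTimeout(resolve, 10));
}

async function onSessionError(sessionID: string): Promise<void> {
  if (recoveringSessions.has(sessionID)) return; // re-entrancy guard
  recoveringSessions.add(sessionID);
  try {
    await recover(sessionID);
  } finally {
    recoveringSessions.delete(sessionID);
  }
}

// Two overlapping errors for the same session trigger only one recovery.
await Promise.all([onSessionError("s1"), onSessionError("s1")]);
console.log(recoverCalls); // → 1
```

The `add()` runs synchronously before the first `await`, so the second concurrent call always sees the guard and returns early.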

<!-- lore:019cb3e6-da66-7534-a573-30d2ecadfd53 -->
* **Returning bare promises loses async function from error stack traces**: When an \`async\` function returns another promise without \`await\`, the calling function disappears from error stack traces if the inner promise rejects. A function that drops \`async\` and does \`return someAsyncCall()\` loses its frame entirely. Fix: keep the function \`async\` and use \`return await someAsyncCall()\`. This matters for debugging — the intermediate function name in the stack trace helps locate which code path triggered the failure. ESLint rule \`no-return-await\` is outdated; modern engines optimize \`return await\` in async functions.
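The difference can be shown in a short sketch, assuming hypothetical function names and Node/Bun-style async stack traces:

```typescript
async function fails(): Promise<never> {
  throw new Error("boom");
}

// Bare return: viaBare's frame can disappear from the rejection's stack trace.
async function viaBare(): Promise<unknown> {
  return fails();
}

// `return await`: modern engines keep viaAwait in the async stack trace.
async function viaAwait(): Promise<unknown> {
  return await fails();
}

const stack: string = await viaAwait().then(
  () => "",
  (e: Error) => e.stack ?? "",
);
console.log(stack.includes("viaAwait"));
```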

<!-- lore:019cd20d-f42c-71bf-9da5-b2dd52c5014d -->
* **sgdisk reserves 33 sectors for backup GPT, shrinking partition vs original layout**: When recreating a GPT partition with \`sgdisk\`, it sets LastUsableLBA 33 sectors short of disk end for backup GPT. If the original partition extended to the last sector (common for factory-formatted exFAT SD cards), the recreated partition is too small. Windows validates exFAT VolumeLength matches GPT partition size — mismatch causes 'drive not formatted' error. Fix: patch the exFAT VBR's VolumeLength to match GPT partition size (LastLBA - FirstLBA + 1), then recalculate boot region checksum (sector 11). Do NOT extend LastUsableLBA past backup GPT header location.
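The size fix above reduces to simple arithmetic plus one field patch. A hedged TypeScript sketch (the byte offset of VolumeLength is per the exFAT specification; the example LBAs are made up, and the sector-11 checksum recalculation is not shown):

```typescript
// Compute the exFAT VolumeLength (in sectors) that must match the GPT entry.
// GPT FirstLBA/LastLBA are inclusive on both ends.
function volumeLengthFromGpt(firstLBA: bigint, lastLBA: bigint): bigint {
  return lastLBA - firstLBA + 1n;
}

// VolumeLength is a 64-bit little-endian field at byte offset 72 of the exFAT
// VBR (sector 0 of the boot region). After patching it, the boot region
// checksum in sector 11 must be recalculated — not shown here.
function patchVolumeLength(vbr: Uint8Array, sectors: bigint): void {
  new DataView(vbr.buffer, vbr.byteOffset, vbr.byteLength)
    .setBigUint64(72, sectors, true);
}

// Example with made-up LBAs:
console.log(volumeLengthFromGpt(2048n, 62333918n)); // → 62331871n
```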

<!-- lore:019c8f4f-67ca-7212-a8c4-8a75b230ceea -->
* **Test DB isolation via LORE\_DB\_PATH and Bun test preload**: Lore test suite uses isolated temp DB via test/setup.ts preload (bunfig.toml). Preload sets LORE\_DB\_PATH to mkdtempSync path before any imports of src/db.ts; afterAll cleans up. src/db.ts checks LORE\_DB\_PATH first. agents-file.test.ts needs beforeEach cleanup for intra-file isolation and TEST\_UUIDS cleanup in afterAll (shared with ltm.test.ts). Individual test files don't need close() calls — preload handles DB lifecycle.

@@ -55,13 +49,13 @@
* **Lore logging: LORE\_DEBUG gating for info/warn, always-on for errors**: src/log.ts provides three levels: log.info() and log.warn() are suppressed unless LORE\_DEBUG=1 or LORE\_DEBUG=true; log.error() always emits. All write to stderr with \[lore] prefix. This exists because OpenCode TUI renders all stderr as red error text — routine status messages (distillation counts, pruning stats, consolidation) were alarming users. Rule: use log.info() for successful operations and status, log.warn() for non-actionable oddities (e.g. dropping trailing messages), log.error() only in catch blocks for real failures. Never use console.error directly in plugin source files.

<!-- lore:019cb12a-c957-7e24-b3f5-6869f3429d13 -->
* **Lore release process: craft + issue-label publish**: Lore/Craft release pipeline and gotchas: (1) Trigger release.yml via workflow\_dispatch with version='auto' — craft determines version and creates GitHub issue. Label 'accepted' → publish.yml runs craft publish with npm OIDC. Don't create release branches or bump package.json manually. (2) GitHub App must be installed per-repo ('Only select repositories' → add at Settings → Installations). APP\_ID/APP\_PRIVATE\_KEY in \`production\` environment. Symptom: 404 on GET /repos/.../installation. (3) npm OIDC only works for publish — \`npm info\` needs NPM\_TOKEN for private packages (public works without auth).
* **Lore release process: craft + issue-label publish**: Lore/Craft release pipeline: (1) Trigger release.yml via workflow\_dispatch with version='auto' — craft determines version and creates GitHub issue. Label 'accepted' → publish.yml runs craft publish with npm OIDC. Don't create release branches or bump package.json manually. (2) GitHub App must be installed per-repo. APP\_ID/APP\_PRIVATE\_KEY in \`production\` environment. Symptom: 404 on GET /repos/.../installation. (3) npm OIDC only works for publish — \`npm info\` needs NPM\_TOKEN for private packages.

<!-- lore:019cb200-0001-7000-8000-000000000001 -->
* **PR workflow for opencode-lore: branch → PR → auto-merge**: All changes (including minor fixes and test-only changes) must go through a branch + PR + auto-merge, never pushed directly to main. Workflow: (1) git checkout -b \<type>/\<slug>, (2) commit, (3) git push -u origin HEAD, (4) gh pr create --title "..." --body "..." --base main, (5) gh pr merge --auto --squash \<PR#>. Branch name conventions follow merged PR history: fix/\<slug>, feat/\<slug>, chore/\<slug>. Auto-merge with squash is required (merge commits disallowed). Never push directly to main even for trivial changes.

### Preference

<!-- lore:019ca19d-fc02-7657-b2e9-7764658c01a5 -->
* **Code style**: User prefers no backwards-compat shims — fix callers directly. Prefer explicit error handling over silent failures. Derive thresholds from existing constants rather than hardcoding magic numbers (e.g., use \`raw.length <= COL\_COUNT\` instead of \`n < 10\_000\`). In CI, define shared env vars at workflow level, not per-job. Always dry-run before bulk destructive operations (SELECT before DELETE to verify row count).
* **Code style**: No backwards-compat shims — fix callers directly. Prefer explicit error handling over silent failures. Derive thresholds from existing constants rather than hardcoding magic numbers. In CI, define shared env vars at workflow level, not per-job. Dry-run before bulk destructive operations (SELECT before DELETE). Prefer \`jq\`/\`sed\`/\`awk\` over \`node -e\` for JSON manipulation in CI scripts.
<!-- End lore-managed section -->
8 changes: 8 additions & 0 deletions src/curator.ts
@@ -113,6 +113,11 @@ export async function run(input: {
path: { id: workerID },
query: { limit: 2 },
});
// Rotate worker session so the next call starts fresh — prevents
// accumulating multiple assistant messages with reasoning/thinking parts,
// which providers reject ("Multiple reasoning_opaque values").
workerSessions.delete(input.sessionID);

const last = msgs.data?.at(-1);
if (!last || last.info.role !== "assistant")
return { created: 0, updated: 0, deleted: 0 };
@@ -222,6 +227,9 @@ export async function consolidate(input: {
path: { id: workerID },
query: { limit: 2 },
});
// Rotate worker session — see run() comment.
workerSessions.delete(input.sessionID);

const last = msgs.data?.at(-1);
if (!last || last.info.role !== "assistant") return { updated: 0, deleted: 0 };

28 changes: 27 additions & 1 deletion src/db.ts
@@ -2,7 +2,7 @@ import { Database } from "bun:sqlite";
import { join, dirname } from "path";
import { mkdirSync } from "fs";

const SCHEMA_VERSION = 5;
const SCHEMA_VERSION = 6;

const MIGRATIONS: string[] = [
`
@@ -153,6 +153,32 @@ const MIGRATIONS: string[] = [
ALTER TABLE distillations ADD COLUMN archived INTEGER NOT NULL DEFAULT 0;
CREATE INDEX IF NOT EXISTS idx_distillation_archived ON distillations(archived);
`,
`
-- Version 6: Compound indexes for common multi-column query patterns.
-- Almost every query filters on (project_id, session_id) but only single-column
-- indexes existed, forcing SQLite to pick one and scan for the rest.

-- temporal_messages: covers bySession, search-LIKE fallback, count, undistilledCount
CREATE INDEX IF NOT EXISTS idx_temporal_project_session ON temporal_messages(project_id, session_id);
-- temporal_messages: covers undistilled() and undistilledCount() with distilled filter
CREATE INDEX IF NOT EXISTS idx_temporal_project_session_distilled ON temporal_messages(project_id, session_id, distilled);
-- temporal_messages: covers pruning TTL pass and size-cap pass (distilled=1 ordered by created_at)
CREATE INDEX IF NOT EXISTS idx_temporal_project_distilled_created ON temporal_messages(project_id, distilled, created_at);

-- distillations: covers loadForSession, latestObservations, searchDistillations, resetOrphans
CREATE INDEX IF NOT EXISTS idx_distillation_project_session ON distillations(project_id, session_id);
-- distillations: covers gen0Count, loadGen0, gradient prefix loading (archived filter)
CREATE INDEX IF NOT EXISTS idx_distillation_project_session_gen_archived ON distillations(project_id, session_id, generation, archived);

-- Drop redundant single-column indexes that are now left-prefixes of compound indexes.
-- idx_temporal_project is a prefix of idx_temporal_project_session.
-- idx_distillation_project is a prefix of idx_distillation_project_session.
-- idx_temporal_distilled is not a left-prefix of any compound index, but it is
-- low-selectivity (0/1) and every query that used it also filters on
-- project_id, so it is covered by the new compound indexes.
DROP INDEX IF EXISTS idx_temporal_project;
DROP INDEX IF EXISTS idx_temporal_distilled;
DROP INDEX IF EXISTS idx_distillation_project;
`,
];

function dataDir() {
8 changes: 8 additions & 0 deletions src/distillation.ts
@@ -388,6 +388,11 @@ async function distillSegment(input: {
path: { id: workerID },
query: { limit: 2 },
});
// Rotate worker session so the next call starts fresh — prevents
// accumulating multiple assistant messages with reasoning/thinking parts,
// which providers reject ("Multiple reasoning_opaque values").
workerSessions.delete(input.sessionID);

const last = msgs.data?.at(-1);
if (!last || last.info.role !== "assistant") return null;

@@ -438,6 +443,9 @@ async function metaDistill(input: {
path: { id: workerID },
query: { limit: 2 },
});
// Rotate worker session — see distillSegment() comment.
workerSessions.delete(input.sessionID);

const last = msgs.data?.at(-1);
if (!last || last.info.role !== "assistant") return null;

65 changes: 39 additions & 26 deletions src/reflect.ts
@@ -1,6 +1,7 @@
import { tool } from "@opencode-ai/plugin/tool";
import * as temporal from "./temporal";
import * as ltm from "./ltm";
import * as log from "./log";
import { db, ensureProject } from "./db";
import { serialize, inline, h, p, ul, lip, liph, t, root } from "./markdown";

@@ -114,34 +115,46 @@ export function createRecallTool(projectPath: string, knowledgeEnabled = true):
const scope = args.scope ?? "all";
const sid = context.sessionID;

const temporalResults =
scope === "knowledge"
? []
: temporal.search({
projectPath,
query: args.query,
sessionID: scope === "session" ? sid : undefined,
limit: 10,
});
let temporalResults: temporal.TemporalMessage[] = [];
if (scope !== "knowledge") {
try {
temporalResults = temporal.search({
projectPath,
query: args.query,
sessionID: scope === "session" ? sid : undefined,
limit: 10,
});
} catch (err) {
log.error("recall: temporal search failed:", err);
}
}

const distillationResults =
scope === "knowledge"
? []
: searchDistillations({
projectPath,
query: args.query,
sessionID: scope === "session" ? sid : undefined,
limit: 5,
});
let distillationResults: Distillation[] = [];
if (scope !== "knowledge") {
try {
distillationResults = searchDistillations({
projectPath,
query: args.query,
sessionID: scope === "session" ? sid : undefined,
limit: 5,
});
} catch (err) {
log.error("recall: distillation search failed:", err);
}
}

const knowledgeResults =
!knowledgeEnabled || scope === "session"
? []
: ltm.search({
query: args.query,
projectPath,
limit: 10,
});
let knowledgeResults: ltm.KnowledgeEntry[] = [];
if (knowledgeEnabled && scope !== "session") {
try {
knowledgeResults = ltm.search({
query: args.query,
projectPath,
limit: 10,
});
} catch (err) {
log.error("recall: knowledge search failed:", err);
}
}

return formatResults({
temporalResults,
19 changes: 18 additions & 1 deletion test/db.test.ts
@@ -21,7 +21,24 @@ describe("db", () => {
const row = db().query("SELECT version FROM schema_version").get() as {
version: number;
};
expect(row.version).toBe(5);
expect(row.version).toBe(6);
});

test("compound indexes exist for common query patterns", () => {
const indexes = db()
.query("SELECT name FROM sqlite_master WHERE type='index' AND name LIKE 'idx_%' ORDER BY name")
.all() as Array<{ name: string }>;
const names = indexes.map((i) => i.name);
// Compound indexes added in version 6
expect(names).toContain("idx_temporal_project_session");
expect(names).toContain("idx_temporal_project_session_distilled");
expect(names).toContain("idx_temporal_project_distilled_created");
expect(names).toContain("idx_distillation_project_session");
expect(names).toContain("idx_distillation_project_session_gen_archived");
// Redundant single-column indexes should be dropped
expect(names).not.toContain("idx_temporal_project");
expect(names).not.toContain("idx_temporal_distilled");
expect(names).not.toContain("idx_distillation_project");
});

test("ensureProject creates and returns id", () => {