diff --git a/AGENTS.md b/AGENTS.md
index e7366d6..6906214 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -37,12 +37,6 @@
 
 * **Lore auto-recovery can infinite-loop without re-entrancy guard**: Three v0.5.2 bugs causing excessive background LLM requests: (1) Auto-recovery loop — session.error handler injected recovery prompt → could overflow again → loop. Fix: recoveringSessions Set as re-entrancy guard. (2) Curator ran every idle — `onIdle || afterTurns` short-circuited (onIdle=true). Fix: `||` → `&&`. Lesson: boolean flag gating numeric threshold needs AND not OR. (3) shouldSkip() fell back to session.list() on unknown sessions. Fix: remove list fallback, cache in activeSessions.
-
-* **Returning bare promises loses async function from error stack traces**: When an `async` function returns another promise without `await`, the calling function disappears from error stack traces if the inner promise rejects. A function that drops `async` and does `return someAsyncCall()` loses its frame entirely. Fix: keep the function `async` and use `return await someAsyncCall()`. This matters for debugging — the intermediate function name in the stack trace helps locate which code path triggered the failure. ESLint rule `no-return-await` is outdated; modern engines optimize `return await` in async functions.
-
-
-* **sgdisk reserves 33 sectors for backup GPT, shrinking partition vs original layout**: When recreating a GPT partition with `sgdisk`, it sets LastUsableLBA 33 sectors short of disk end for backup GPT. If the original partition extended to the last sector (common for factory-formatted exFAT SD cards), the recreated partition is too small. Windows validates exFAT VolumeLength matches GPT partition size — mismatch causes 'drive not formatted' error. Fix: patch the exFAT VBR's VolumeLength to match GPT partition size (LastLBA - FirstLBA + 1), then recalculate boot region checksum (sector 11). Do NOT extend LastUsableLBA past backup GPT header location.
-
 
 * **Test DB isolation via LORE_DB_PATH and Bun test preload**: Lore test suite uses isolated temp DB via test/setup.ts preload (bunfig.toml). Preload sets LORE_DB_PATH to mkdtempSync path before any imports of src/db.ts; afterAll cleans up. src/db.ts checks LORE_DB_PATH first. agents-file.test.ts needs beforeEach cleanup for intra-file isolation and TEST_UUIDS cleanup in afterAll (shared with ltm.test.ts). Individual test files don't need close() calls — preload handles DB lifecycle.
 
@@ -55,7 +49,7 @@
 
 * **Lore logging: LORE_DEBUG gating for info/warn, always-on for errors**: src/log.ts provides three levels: log.info() and log.warn() are suppressed unless LORE_DEBUG=1 or LORE_DEBUG=true; log.error() always emits. All write to stderr with [lore] prefix. This exists because OpenCode TUI renders all stderr as red error text — routine status messages (distillation counts, pruning stats, consolidation) were alarming users. Rule: use log.info() for successful operations and status, log.warn() for non-actionable oddities (e.g. dropping trailing messages), log.error() only in catch blocks for real failures. Never use console.error directly in plugin source files.
 
-* **Lore release process: craft + issue-label publish**: Lore/Craft release pipeline and gotchas: (1) Trigger release.yml via workflow_dispatch with version='auto' — craft determines version and creates GitHub issue. Label 'accepted' → publish.yml runs craft publish with npm OIDC. Don't create release branches or bump package.json manually. (2) GitHub App must be installed per-repo ('Only select repositories' → add at Settings → Installations). APP_ID/APP_PRIVATE_KEY in `production` environment. Symptom: 404 on GET /repos/.../installation. (3) npm OIDC only works for publish — `npm info` needs NPM_TOKEN for private packages (public works without auth).
+* **Lore release process: craft + issue-label publish**: Lore/Craft release pipeline: (1) Trigger release.yml via workflow_dispatch with version='auto' — craft determines version and creates GitHub issue. Label 'accepted' → publish.yml runs craft publish with npm OIDC. Don't create release branches or bump package.json manually. (2) GitHub App must be installed per-repo. APP_ID/APP_PRIVATE_KEY in `production` environment. Symptom: 404 on GET /repos/.../installation. (3) npm OIDC only works for publish — `npm info` needs NPM_TOKEN for private packages.
 
 * **PR workflow for opencode-lore: branch → PR → auto-merge**: All changes (including minor fixes and test-only changes) must go through a branch + PR + auto-merge, never pushed directly to main. Workflow: (1) git checkout -b <type>/<name>, (2) commit, (3) git push -u origin HEAD, (4) gh pr create --title "..." --body "..." --base main, (5) gh pr merge --auto --squash <pr>. Branch name conventions follow merged PR history: fix/<name>, feat/<name>, chore/<name>. Auto-merge with squash is required (merge commits disallowed). Never push directly to main even for trivial changes.
 
@@ -63,5 +57,5 @@
 ### Preference
 
-* **Code style**: User prefers no backwards-compat shims — fix callers directly. Prefer explicit error handling over silent failures. Derive thresholds from existing constants rather than hardcoding magic numbers (e.g., use `raw.length <= COL_COUNT` instead of `n < 10_000`). In CI, define shared env vars at workflow level, not per-job. Always dry-run before bulk destructive operations (SELECT before DELETE to verify row count).
+* **Code style**: No backwards-compat shims — fix callers directly. Prefer explicit error handling over silent failures. Derive thresholds from existing constants rather than hardcoding magic numbers. In CI, define shared env vars at workflow level, not per-job. Dry-run before bulk destructive operations (SELECT before DELETE).
 
 Prefer `jq`/`sed`/`awk` over `node -e` for JSON manipulation in CI scripts.
diff --git a/src/curator.ts b/src/curator.ts
index 9ca4603..c92d14e 100644
--- a/src/curator.ts
+++ b/src/curator.ts
@@ -113,6 +113,11 @@ export async function run(input: {
     path: { id: workerID },
     query: { limit: 2 },
   });
+  // Rotate worker session so the next call starts fresh — prevents
+  // accumulating multiple assistant messages with reasoning/thinking parts,
+  // which providers reject ("Multiple reasoning_opaque values").
+  workerSessions.delete(input.sessionID);
+
   const last = msgs.data?.at(-1);
   if (!last || last.info.role !== "assistant") return { created: 0, updated: 0, deleted: 0 };
 
@@ -222,6 +227,9 @@ export async function consolidate(input: {
     path: { id: workerID },
     query: { limit: 2 },
   });
+  // Rotate worker session — see run() comment.
+  workerSessions.delete(input.sessionID);
+
   const last = msgs.data?.at(-1);
   if (!last || last.info.role !== "assistant") return { updated: 0, deleted: 0 };
 
diff --git a/src/db.ts b/src/db.ts
index 361517f..561beb4 100644
--- a/src/db.ts
+++ b/src/db.ts
@@ -2,7 +2,7 @@ import { Database } from "bun:sqlite";
 import { join, dirname } from "path";
 import { mkdirSync } from "fs";
 
-const SCHEMA_VERSION = 5;
+const SCHEMA_VERSION = 6;
 
 const MIGRATIONS: string[] = [
   `
@@ -153,6 +153,32 @@
   ALTER TABLE distillations ADD COLUMN archived INTEGER NOT NULL DEFAULT 0;
   CREATE INDEX IF NOT EXISTS idx_distillation_archived ON distillations(archived);
   `,
+  `
+  -- Version 6: Compound indexes for common multi-column query patterns.
+  -- Almost every query filters on (project_id, session_id) but only single-column
+  -- indexes existed, forcing SQLite to pick one and scan for the rest.
+
+  -- temporal_messages: covers bySession, search-LIKE fallback, count, undistilledCount
+  CREATE INDEX IF NOT EXISTS idx_temporal_project_session ON temporal_messages(project_id, session_id);
+  -- temporal_messages: covers undistilled() and undistilledCount() with distilled filter
+  CREATE INDEX IF NOT EXISTS idx_temporal_project_session_distilled ON temporal_messages(project_id, session_id, distilled);
+  -- temporal_messages: covers pruning TTL pass and size-cap pass (distilled=1 ordered by created_at)
+  CREATE INDEX IF NOT EXISTS idx_temporal_project_distilled_created ON temporal_messages(project_id, distilled, created_at);
+
+  -- distillations: covers loadForSession, latestObservations, searchDistillations, resetOrphans
+  CREATE INDEX IF NOT EXISTS idx_distillation_project_session ON distillations(project_id, session_id);
+  -- distillations: covers gen0Count, loadGen0, gradient prefix loading (archived filter)
+  CREATE INDEX IF NOT EXISTS idx_distillation_project_session_gen_archived ON distillations(project_id, session_id, generation, archived);
+
+  -- Drop redundant single-column indexes that are now left-prefixes of compound indexes.
+  -- idx_temporal_project is a prefix of idx_temporal_project_session.
+  -- idx_distillation_project is a prefix of idx_distillation_project_session.
+  -- idx_temporal_distilled is a prefix of no compound index but is low-selectivity (0/1)
+  -- and all queries that use it also filter on project_id — covered by the new compounds.
+  DROP INDEX IF EXISTS idx_temporal_project;
+  DROP INDEX IF EXISTS idx_temporal_distilled;
+  DROP INDEX IF EXISTS idx_distillation_project;
+  `,
 ];
 
 function dataDir() {
diff --git a/src/distillation.ts b/src/distillation.ts
index 453e5bf..ad4d924 100644
--- a/src/distillation.ts
+++ b/src/distillation.ts
@@ -388,6 +388,11 @@ async function distillSegment(input: {
     path: { id: workerID },
     query: { limit: 2 },
   });
+  // Rotate worker session so the next call starts fresh — prevents
+  // accumulating multiple assistant messages with reasoning/thinking parts,
+  // which providers reject ("Multiple reasoning_opaque values").
+  workerSessions.delete(input.sessionID);
+
   const last = msgs.data?.at(-1);
   if (!last || last.info.role !== "assistant") return null;
 
@@ -438,6 +443,9 @@ async function metaDistill(input: {
     path: { id: workerID },
     query: { limit: 2 },
   });
+  // Rotate worker session — see distillSegment() comment.
+  workerSessions.delete(input.sessionID);
+
   const last = msgs.data?.at(-1);
   if (!last || last.info.role !== "assistant") return null;
 
diff --git a/src/reflect.ts b/src/reflect.ts
index c37fd2c..5addfac 100644
--- a/src/reflect.ts
+++ b/src/reflect.ts
@@ -1,6 +1,7 @@
 import { tool } from "@opencode-ai/plugin/tool";
 import * as temporal from "./temporal";
 import * as ltm from "./ltm";
+import * as log from "./log";
 import { db, ensureProject } from "./db";
 import { serialize, inline, h, p, ul, lip, liph, t, root } from "./markdown";
 
@@ -114,34 +115,46 @@ export function createRecallTool(projectPath: string, knowledgeEnabled = true):
     const scope = args.scope ?? "all";
     const sid = context.sessionID;
 
-    const temporalResults =
-      scope === "knowledge"
-        ? []
-        : temporal.search({
-            projectPath,
-            query: args.query,
-            sessionID: scope === "session" ? sid : undefined,
-            limit: 10,
-          });
+    let temporalResults: temporal.TemporalMessage[] = [];
+    if (scope !== "knowledge") {
+      try {
+        temporalResults = temporal.search({
+          projectPath,
+          query: args.query,
+          sessionID: scope === "session" ? sid : undefined,
+          limit: 10,
+        });
+      } catch (err) {
+        log.error("recall: temporal search failed:", err);
+      }
+    }
 
-    const distillationResults =
-      scope === "knowledge"
-        ? []
-        : searchDistillations({
-            projectPath,
-            query: args.query,
-            sessionID: scope === "session" ? sid : undefined,
-            limit: 5,
-          });
+    let distillationResults: Distillation[] = [];
+    if (scope !== "knowledge") {
+      try {
+        distillationResults = searchDistillations({
+          projectPath,
+          query: args.query,
+          sessionID: scope === "session" ? sid : undefined,
+          limit: 5,
+        });
+      } catch (err) {
+        log.error("recall: distillation search failed:", err);
+      }
+    }
 
-    const knowledgeResults =
-      !knowledgeEnabled || scope === "session"
-        ? []
-        : ltm.search({
-            query: args.query,
-            projectPath,
-            limit: 10,
-          });
+    let knowledgeResults: ltm.KnowledgeEntry[] = [];
+    if (knowledgeEnabled && scope !== "session") {
+      try {
+        knowledgeResults = ltm.search({
+          query: args.query,
+          projectPath,
+          limit: 10,
+        });
+      } catch (err) {
+        log.error("recall: knowledge search failed:", err);
+      }
+    }
 
     return formatResults({
       temporalResults,
diff --git a/test/db.test.ts b/test/db.test.ts
index aab0e2c..1d3a61e 100644
--- a/test/db.test.ts
+++ b/test/db.test.ts
@@ -21,7 +21,24 @@ describe("db", () => {
     const row = db().query("SELECT version FROM schema_version").get() as {
       version: number;
     };
-    expect(row.version).toBe(5);
+    expect(row.version).toBe(6);
+  });
+
+  test("compound indexes exist for common query patterns", () => {
+    const indexes = db()
+      .query("SELECT name FROM sqlite_master WHERE type='index' AND name LIKE 'idx_%' ORDER BY name")
+      .all() as Array<{ name: string }>;
+    const names = indexes.map((i) => i.name);
+    // Compound indexes added in version 6
+    expect(names).toContain("idx_temporal_project_session");
+    expect(names).toContain("idx_temporal_project_session_distilled");
+    expect(names).toContain("idx_temporal_project_distilled_created");
+    expect(names).toContain("idx_distillation_project_session");
+    expect(names).toContain("idx_distillation_project_session_gen_archived");
+    // Redundant single-column indexes should be dropped
+    expect(names).not.toContain("idx_temporal_project");
+    expect(names).not.toContain("idx_temporal_distilled");
+    expect(names).not.toContain("idx_distillation_project");
   });
 
   test("ensureProject creates and returns id", () => {
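
Note: the worker-session rotation applied in src/curator.ts and src/distillation.ts can be sketched in isolation. This is a minimal sketch, not the real plugin code: `createSession`, `getWorkerSession`, and the Map shape are hypothetical stand-ins for the actual session API.

```typescript
type SessionID = string;

// One cached worker session per parent session (mirrors the patch's
// workerSessions cache; the key/value shape here is an assumption).
const workerSessions = new Map<SessionID, SessionID>();

let counter = 0;
function createSession(): SessionID {
  // Stand-in for the real session-creation call.
  return `worker-${++counter}`;
}

function getWorkerSession(parentID: SessionID): SessionID {
  let worker = workerSessions.get(parentID);
  if (worker === undefined) {
    worker = createSession();
    workerSessions.set(parentID, worker);
  }
  return worker;
}

// After reading the assistant reply, drop the cache entry. The next call then
// starts a fresh session instead of appending a second assistant message with
// reasoning parts, which is the provider error the patch works around.
function runOnce(parentID: SessionID): SessionID {
  const worker = getWorkerSession(parentID);
  // ... prompt the worker session and read the last assistant message here ...
  workerSessions.delete(parentID); // rotate
  return worker;
}
```

Calling `runOnce` twice with the same parent ID yields two distinct worker sessions, which is the property each rotated call site relies on.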
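
Note: the per-backend try/catch structure introduced in src/reflect.ts generalizes to any multi-source lookup: each source degrades to `[]` on failure instead of failing the whole tool call. A sketch under stated assumptions; `searchTemporalStub` and `searchKnowledgeStub` are invented stand-ins, not the real temporal/ltm/distillation functions.

```typescript
function searchTemporalStub(_query: string): string[] {
  throw new Error("backend down"); // simulate one source failing
}

function searchKnowledgeStub(query: string): string[] {
  return [`knowledge:${query}`];
}

function recall(query: string): { temporal: string[]; knowledge: string[] } {
  // Each backend gets its own try/catch so an exception in one source
  // only empties that slice of the results.
  let temporal: string[] = [];
  try {
    temporal = searchTemporalStub(query);
  } catch {
    // the real code calls log.error(...) here; results stay []
  }

  let knowledge: string[] = [];
  try {
    knowledge = searchKnowledgeStub(query);
  } catch {
    // same: log and fall through with []
  }

  return { temporal, knowledge };
}
```

The design choice is partial degradation: a broken FTS table or a failing LTM query still leaves the user with whatever the other backends found.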
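
Note: the "boolean flag gating a numeric threshold needs AND not OR" lesson from the retained AGENTS.md bullet fits in two lines. `shouldRunCurator` and its default threshold are illustrative, not the actual Lore implementation.

```typescript
function shouldRunCurator(onIdle: boolean, turnsSinceRun: number, afterTurns = 5): boolean {
  // With `||`, onIdle=true would short-circuit and fire on every idle event;
  // with `&&`, the turn threshold still gates the run.
  return onIdle && turnsSinceRun >= afterTurns;
}
```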