feat: [ENG-2496] Daemon llmservice accumulator + real-time TaskHistor… #565

Merged
bao-byterover merged 1 commit into proj/persis-task-history from feat/ENG-2496 on Apr 27, 2026

Conversation

@bao-byterover
Collaborator

No description provided.

bao-byterover requested a review from leehpham as a code owner on April 27, 2026 09:53
bao-byterover merged commit b7a3f1a into proj/persis-task-history on Apr 27, 2026
3 checks passed
@github-actions
Contributor

github-actions Bot commented Apr 27, 2026

Claude finished @bao-byterover's task in 3m 34s


PR Review — ENG-2496: Daemon LLM Accumulator + TaskHistoryHook

  • Gather context
  • Read changed files
  • Analyze correctness, security, and code quality
  • Post review

Score: 4 / 5 — Well-structured, thoroughly tested feature. A few correctness nits and one real bug worth fixing.


Overview

This PR adds two things: (1) an in-memory accumulator in TaskRouter that patches TaskInfo with live llmservice:* event data and flushes it on a 100ms throttle window, and (2) a TaskHistoryHook that persists each TaskInfo snapshot to ITaskHistoryStore at every lifecycle transition. The design is clean — the hook is decoupled via DIP, the throttle correctly uses unref() to avoid holding the event loop, and the test coverage is solid.
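
For reference, the coalescing pattern described above looks roughly like this (a sketch, not a copy of the diff; markDirty, flushDirty, flushTimer, and FLUSH_INTERVAL_MS are the PR's names, while dirtyTaskIds is illustrative):

private readonly dirtyTaskIds = new Set<string>()
private flushTimer: NodeJS.Timeout | undefined

private markDirty(taskId: string): void {
  this.dirtyTaskIds.add(taskId)
  if (this.flushTimer !== undefined) return // a flush is already scheduled; coalesce into it
  this.flushTimer = setTimeout(() => void this.flushDirty(), FLUSH_INTERVAL_MS)
  this.flushTimer.unref?.() // a pending flush never keeps the daemon's event loop alive
}

flushDirty() then drains the dirty set and invokes onTaskUpdate on each registered hook per task (the sequential loop is quoted under Issues below).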


Issues

Bug

task-history-hook.ts:57 — onTaskError overwrites the original error code/name

async onTaskError(_taskId: string, errorMessage: string, task: TaskInfo): Promise<void> {
  await this.persist(task, {
    completedAt: Date.now(),
    error: {code: 'TASK_ERROR', message: errorMessage, name: 'TaskError'}, // ← always generic
    status: 'error',
  })
}

By the time onTaskError fires, task.error is already populated by TaskRouter.handleTaskError with the full TaskErrorData (which may carry AgentNotAvailableError, network codes, etc.). The explicit override replaces it with a stripped-down generic object. The statusShapeFromTaskInfo method already does the right thing — it uses task.error ?? fallback — so the simplest fix is to drop the explicit error key from the override and let statusShapeFromTaskInfo handle it from task.error:

async onTaskError(_taskId: string, _errorMessage: string, task: TaskInfo): Promise<void> {
  await this.persist(task, { completedAt: Date.now(), status: 'error' })
}



Suggestion

task-router.ts — flushDirty uses sequential await across tasks and hooks

for (const taskId of ids) {
  const task = this.tasks.get(taskId)
  if (!task) continue
  for (const hook of this.lifecycleHooks) {
    // eslint-disable-next-line no-await-in-loop
    await hook.onTaskUpdate?.(task)
  }
}

The eslint-disable comment signals this was deliberate, but sequential flushing means a slow onTaskUpdate on task A delays the flush for task B. This is benign for the current TaskHistoryHook (fast I/O stub), but it's a footgun if another hook is added. Consider Promise.allSettled per-task, similar to how notifyHooksCompleted already works:

await Promise.allSettled(ids.flatMap(taskId => {
  const task = this.tasks.get(taskId)
  if (!task) return []
  return this.lifecycleHooks.map(async (hook) => {
    try { await hook.onTaskUpdate?.(task) } catch (e) { /* log */ }
  })
}))

task-history-hook.ts — Case labels in accumulateLlmEvent are out of order (C → A → B)

if (last === undefined) {
  // Case C: empty array — push fresh body entry.
} else if (last.isThinking === true && (last.content ?? '') === '') {
  // Case A: flip the empty isThinking marker to body.
} else {
  // Case B: append to existing body.
}

The labels are alphabetically non-sequential (C, A, B), which is confusing on first read. Re-labelling as A/B/C in top-down order, or switching to descriptive labels (empty-array, flip-thinking-marker, append), costs nothing and improves legibility.
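
For illustration, the same branches with descriptive labels (logic unchanged; only the comments differ):

if (last === undefined) {
  // empty-array: push a fresh body entry.
} else if (last.isThinking === true && (last.content ?? '') === '') {
  // flip-thinking-marker: convert the empty isThinking placeholder into a body entry.
} else {
  // append: extend the existing body entry.
}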


task-router.ts — Unbounded reasoningContents growth

reasoningContents and toolCalls on TaskInfo grow without any cap. For long multi-step agents with heavy tool use or extended thinking, in-memory pressure can accumulate within a single task lifetime. Logging a warning when either array exceeds a threshold (e.g. 500 entries) or soft-capping would be low-overhead insurance.
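
A sketch of the warning variant (the threshold, helper name, and console.warn call are illustrative, not taken from the PR):

const ACCUMULATOR_WARN_THRESHOLD = 500 // illustrative figure

function warnIfUnbounded(taskId: string, label: string, entries: readonly unknown[]): void {
  // Fires when the array hits exactly the threshold (assumes entries are pushed one at a time);
  // swap console.warn for the daemon's real logger.
  if (entries.length === ACCUMULATOR_WARN_THRESHOLD) {
    console.warn(`[TaskRouter] task ${taskId}: ${label} has reached ${entries.length} entries`)
  }
}

// Called after each accumulator push, e.g.:
// warnIfUnbounded(taskId, 'reasoningContents', task.reasoningContents)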


i-task-lifecycle-hook.ts:28 — Comment says "throttled flush (~100ms)" but the constant lives only in TaskRouter

The 100ms figure in the interface comment will drift if FLUSH_INTERVAL_MS is ever changed. Consider referencing the constant symbolically in the comment, or moving the comment to the TaskRouter constant where it can stay adjacent to the value.
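
One low-cost way to keep the figure and the value adjacent (a sketch; the JSDoc wording is illustrative):

// task-router.ts
/**
 * Coalescing window for dirty-task flushes. The onTaskUpdate docs in
 * i-task-lifecycle-hook.ts should point at this constant rather than
 * repeating the number.
 */
const FLUSH_INTERVAL_MS = 100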


Nitpick

task-history-hook.ts:82 — onTaskCreate always passes {status: 'created'} via override, making statusShapeFromTaskInfo redundant for this path

The default branch of statusShapeFromTaskInfo already returns {status: 'created'}. Both are correct, but the explicit override makes it slightly harder to follow which source wins. A comment // status 'created' matches the default branch would clarify this.
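
For instance (the method signature here mirrors onTaskError and is assumed, not copied from the diff):

async onTaskCreate(_taskId: string, task: TaskInfo): Promise<void> {
  // status 'created' matches the default branch of statusShapeFromTaskInfo;
  // kept explicit so the persisted shape is obvious at the call site.
  await this.persist(task, {status: 'created'})
}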


Positive Highlights

  • suggestion Throttle + unref(): markDirty/flushDirty + flushTimer.unref?.() is exactly the right pattern for a daemon-friendly coalescing flush. No daemon shutdown is ever blocked by a pending flush.
  • praise Test coverage: The test file covers the full accumulator state machine (thinking → flip → append, multi-response overwrite, toolCall lifecycle, unknown task ignored, burst coalescing, throttle boundary). This is production-grade.
  • praise DIP via getStore factory: TaskHistoryHook never imports FileTaskHistoryStore — the factory injects the store, keeping the hook testable in isolation; a minimal sketch of the shape follows after this list.
  • suggestion resolveActiveProvider fail-open: The safeResolveActiveProvider wrapper ensures provider resolution never blocks dispatch. The conditional-await comment explaining the microtask timing is appreciated.
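
A rough shape of that injection (ITaskHistoryStore, TaskHistoryHook, and getStore are the PR's names; the factory wiring, constructor arguments, and historyDir are hypothetical):

class TaskHistoryHook implements ITaskLifecycleHook {
  // The hook only ever sees the interface; no concrete store class is imported here.
  constructor(private readonly getStore: () => ITaskHistoryStore) {}
  // ...lifecycle methods resolve the store via this.getStore() inside persist(), as in the PR
}

// The composition root names the concrete store exactly once:
// const hook = new TaskHistoryHook(() => new FileTaskHistoryStore(historyDir))
// Tests pass () => inMemoryStore instead, without touching the hook.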

bao-byterover deleted the feat/ENG-2496 branch on April 27, 2026 09:53