Conversation
📝 Walkthrough

Threaded AbortSignal and rate-limit queueing through the LLM/provider, presenter, compaction, and tool flows; added `executeWithRateLimit` plus queue snapshot hooks; the UI renders ephemeral compact rate-limit streaming blocks; types and tests were updated for abort and rate-limit behaviors.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Client/UI
    participant Presenter as DeepChatAgentPresenter
    participant RateLimit as RateLimitManager
    participant LLM as LLMProvider
    participant Renderer as Renderer
    UI->>Presenter: runStreamForMessage(...)
    Presenter->>RateLimit: executeWithRateLimit(providerId, { signal, onQueued })
    alt queued
        RateLimit-->>Presenter: onQueued(snapshot)
        Presenter->>Renderer: emit stream:response (action_type: 'rate_limit')
        Renderer->>UI: show compact rate-limit indicator
        RateLimit->>RateLimit: wait for QPS window
    end
    RateLimit-->>Presenter: queued promise resolves
    Presenter->>LLM: generateText / coreStream (with signal)
    LLM-->>Presenter: streaming events
    Presenter->>Renderer: emit stream:response (clear rate_limit / stream data)
    Renderer->>UI: update/hide rate-limit indicator
```
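The queueing contract the diagram describes can be sketched as a minimal QPS gate. This is an illustrative stand-in, not the repo's actual `RateLimitManager`: the option and snapshot field names mirror the walkthrough, but the window-based implementation is an assumption.

```typescript
interface RateLimitQueueSnapshot {
  providerId: string
  queueLength: number
  estimatedWaitTime: number
}

interface ExecuteWithRateLimitOptions {
  signal?: AbortSignal
  onQueued?: (snapshot: RateLimitQueueSnapshot) => void
}

// One request per windowMs; later callers wait for their slot.
class RateLimitGate {
  private nextFreeAt = 0
  private queueLength = 0

  constructor(
    private readonly providerId: string,
    private readonly windowMs: number
  ) {}

  async executeWithRateLimit(options?: ExecuteWithRateLimitOptions): Promise<void> {
    if (options?.signal?.aborted) throw new Error('aborted before queueing')

    const now = Date.now()
    const waitMs = Math.max(0, this.nextFreeAt - now)
    // Reserve the next slot for this caller.
    this.nextFreeAt = Math.max(now, this.nextFreeAt) + this.windowMs
    if (waitMs === 0) return // free slot: run immediately, no queue event

    this.queueLength += 1
    options?.onQueued?.({
      providerId: this.providerId,
      queueLength: this.queueLength,
      estimatedWaitTime: waitMs
    })
    try {
      await new Promise<void>((resolve, reject) => {
        const timer = setTimeout(resolve, waitMs)
        options?.signal?.addEventListener(
          'abort',
          () => {
            clearTimeout(timer)
            reject(new Error('aborted while queued'))
          },
          { once: true }
        )
      })
    } finally {
      this.queueLength -= 1
    }
  }
}
```

Note how `onQueued` only fires when the caller actually has to wait, which is what lets the presenter show the compact indicator only for queued requests.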
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts (1)
160-199: ⚠️ Potential issue | 🔴 Critical

Assert that `executeWithRateLimit` receives cancellation options, not only `providerId`, and pass the signal through the call chain. On lines 160 and 198, the tests validate only the first argument. This allows a regression where queued vision requests become non-abortable.

More critically, the production code at `src/main/presenter/toolPresenter/agentTools/agentToolManager.ts:1240` does not forward the `signal` from `callTool()` options through `callFileSystemTool()` and `readImageWithVisionFallback()` to `executeWithRateLimit()`. The signal is available in the `callTool` method signature (line 376) but is dropped during routing. This prevents vision fallback operations from being cancellable.

Fix requires:

- Thread `options.signal` from `callTool` → `callFileSystemTool` → `readImageWithVisionFallback`
- Pass `{ signal: options?.signal }` to `executeWithRateLimit()` at line 1240
- Update test assertions to validate that the signal is forwarded:

```diff
-expect(llmProviderPresenter.executeWithRateLimit).toHaveBeenCalledWith('openai')
+expect(llmProviderPresenter.executeWithRateLimit).toHaveBeenCalledWith(
+  'openai',
+  expect.objectContaining({ signal: expect.anything() })
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts` around lines 160 - 199, The call chain drops the abort signal so vision fallback requests aren’t cancellable; thread options.signal from callTool through callFileSystemTool into readImageWithVisionFallback and ensure executeWithRateLimit is invoked with an options object that includes { signal: options?.signal } (update the call at executeWithRateLimit site and the signatures/forwarding in callFileSystemTool and readImageWithVisionFallback accordingly), then update the tests (agentToolManagerRead.test) to assert executeWithRateLimit was called with an options object containing the signal rather than only the providerId.
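The threading this comment asks for can be sketched as follows. This is a hypothetical skeleton under assumed signatures (the real `callTool`, `callFileSystemTool`, and `readImageWithVisionFallback` carry more parameters and routing logic); it shows only how the signal is forwarded down the chain rather than dropped.

```typescript
interface CallToolOptions {
  signal?: AbortSignal
}

// Stand-in for the real LLMProviderPresenter: only signal handling matters here.
class LlmProviderPresenterStub {
  async executeWithRateLimit(
    providerId: string,
    options?: { signal?: AbortSignal }
  ): Promise<string> {
    if (options?.signal?.aborted) throw new Error('aborted')
    return providerId
  }
}

class AgentToolManagerSketch {
  constructor(private readonly llm = new LlmProviderPresenterStub()) {}

  // Entry point: accept the signal and forward it instead of dropping it.
  async callTool(toolName: string, options?: CallToolOptions): Promise<string> {
    return this.callFileSystemTool(toolName, options?.signal)
  }

  private async callFileSystemTool(toolName: string, signal?: AbortSignal): Promise<string> {
    return this.readImageWithVisionFallback(toolName, signal)
  }

  private async readImageWithVisionFallback(_toolName: string, signal?: AbortSignal): Promise<string> {
    // The queued rate-limit wait now observes the caller's signal.
    return this.llm.executeWithRateLimit('openai', { signal })
  }
}
```

With this shape, an abort issued before or during the queue wait propagates as a rejection out of `callTool`.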
🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/types.ts (1)
8-19: Prefer a single canonical queue snapshot type across shared/main layers.
`RateLimitQueueSnapshot` is now defined here while `LLMProviderPresenter` imports the same concept from `@shared/presenter`. Consider reusing one source to avoid silent type drift over time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/types.ts` around lines 8 - 19, There are duplicate type definitions for RateLimitQueueSnapshot in this module; remove the local RateLimitQueueSnapshot declaration and have ExecuteWithRateLimitOptions (and any local references) import and use the canonical RateLimitQueueSnapshot from the shared presenter types (the same one LLMProviderPresenter currently imports) so all code uses a single source of truth; update the import list to pull RateLimitQueueSnapshot from `@shared/presenter` and ensure ExecuteWithRateLimitOptions.signal and onQueued remain unchanged except for using the imported type.

src/renderer/src/pages/ChatPage.vue (1)
137-137: Extract the rate-limit prefix to a shared constant.

`'__rate_limit__:'` is now a cross-process contract between main, renderer, and the UI message store. Keeping it duplicated makes a future rename easy to miss and will silently break ephemeral-message hydration.

As per coding guidelines, `src/shared/**`: "Shared TypeScript types and utilities should be placed in `src/shared/`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/pages/ChatPage.vue` at line 137, The RATE_LIMIT_STREAM_MESSAGE_PREFIX constant ('__rate_limit__:') is used across processes and must be moved to a shared module: create and export a constant (e.g., RATE_LIMIT_STREAM_MESSAGE_PREFIX) under src/shared (or your shared constants module), replace the inline declaration in ChatPage.vue with an import of that shared constant, and update any other places (main, renderer, UI message store) that currently duplicate the string to import this new shared symbol so all processes use the single source of truth.

test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts (1)
943-957: Assert the emitted rate-limit block includes a `timestamp`.

`src/renderer/src/components/message/MessageBlockAction.vue` now derives the compact wait text from `block.timestamp`, so this test should pin that field too. Otherwise the renderer can regress to broken elapsed-time text while this suite still passes.

Strengthen the assertion:

```diff
 expect(rateLimitShow).toMatchObject({
   conversationId: 's1',
   blocks: [
     expect.objectContaining({
       type: 'action',
       action_type: 'rate_limit',
       status: 'pending',
+      timestamp: expect.any(Number),
       extra: expect.objectContaining({
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts` around lines 943 - 957, The test's assertion for the emitted rate-limit block is missing the new timestamp field used by MessageBlockAction.vue; update the expectation for rateLimitShow (the object under test in deepchatAgentPresenter.test.ts) to include a timestamp (e.g., assert the block.extra or block itself contains a timestamp using expect.any(Number) or a pinned numeric value) so the emitted rate-limit block matches the renderer's requirement and prevents elapsed-time regressions.

test/main/presenter/llmProviderPresenter/rateLimitManager.test.ts (1)
1-137: Move this suite under the mirrored `managers/` test path.

The source under test lives in `src/main/presenter/llmProviderPresenter/managers/`, so keeping this file under `test/main/presenter/llmProviderPresenter/managers/` will match the repo's expected test layout and make related coverage easier to find.

Based on learnings, `test/**/*.{test,spec}.ts`: Vitest test suites should mirror the source structure under `test/main/**` and `test/renderer/**`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/rateLimitManager.test.ts` around lines 1 - 137, The test suite for RateLimitManager is located outside the mirrored "managers" test folder; move the test file (rateLimitManager.test.ts) so its test path mirrors the source folder that contains the RateLimitManager class (i.e., place it alongside other managers tests), update any imports if needed, and ensure the describe('RateLimitManager', ...) suite remains intact so coverage and organization match the source layout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts`:
- Line 707: The compaction flow needs to accept and forward an AbortSignal so
queued rate-limited compaction can be cancelled: add an optional signal
parameter to summarizeBlocks, summarizeChunkGroups, generateRollingSummary,
applyCompaction, prepareForNextUserTurn, and prepareForResumeTurn, thread that
signal through each call site, and pass it into
llmProviderPresenter.executeWithRateLimit(model.providerId, { signal }) (or the
equivalent executeWithRateLimit overload that accepts a signal) so the existing
abort listeners and queue cleanup are used; ensure every call site in the chain
forwards the received signal rather than dropping it.
In `@src/main/presenter/toolPresenter/agentTools/agentToolManager.ts`:
- Line 1240: The await of
getLlmProviderPresenter().executeWithRateLimit(visionTarget.providerId) is not
cancellable — thread an AbortSignal through the call chain so long waits can be
aborted: add an optional signal param to public entry points (e.g.,
callTool(..., options?), pass options?.signal into callFileSystemTool and into
readImageWithVisionFallback(..., signal?), and update
getLlmProviderPresenter().executeWithRateLimit to accept and respect that signal
so the awaited call can be aborted when the user cancels. Ensure the signal is
propagated from the top-level caller down to the executeWithRateLimit invocation
and that cancellation is handled cleanly.
In `@src/renderer/src/i18n/zh-CN/chat.json`:
- Line 86: Update the translation value for the key rateLimitCompactLoading in
chat.json to convey duration rather than ordinal phrasing: replace "限速中(第
{seconds} 秒)" with a duration-focused string such as "限速中({seconds} 秒)" or
"限速中(已等待 {seconds} 秒)" so it matches the semantics of other locales.
In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts`:
- Around line 967-997: The test uses Promise.withResolvers (variable queued)
which is unsupported on Node 20.x; replace it with the portable resolver pattern
by creating a promise along with captured resolve/reject functions (e.g., let
queuedResolve, queuedReject; const queued = { promise: new Promise((res, rej) =>
{ queuedResolve = res; queuedReject = rej }), resolve: queuedResolve, reject:
queuedReject } or equivalent), keep the rest of the logic in
llmProvider.executeWithRateLimit the same (call queued.resolve() where currently
used and await queued.promise at the corresponding await site), and ensure any
rejects use queued.reject to mirror the original behavior.
---
Outside diff comments:
In `@test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts`:
- Around line 160-199: The call chain drops the abort signal so vision fallback
requests aren’t cancellable; thread options.signal from callTool through
callFileSystemTool into readImageWithVisionFallback and ensure
executeWithRateLimit is invoked with an options object that includes { signal:
options?.signal } (update the call at executeWithRateLimit site and the
signatures/forwarding in callFileSystemTool and readImageWithVisionFallback
accordingly), then update the tests (agentToolManagerRead.test) to assert
executeWithRateLimit was called with an options object containing the signal
rather than only the providerId.
---
Nitpick comments:
In `@src/main/presenter/llmProviderPresenter/types.ts`:
- Around line 8-19: There are duplicate type definitions for
RateLimitQueueSnapshot in this module; remove the local RateLimitQueueSnapshot
declaration and have ExecuteWithRateLimitOptions (and any local references)
import and use the canonical RateLimitQueueSnapshot from the shared presenter
types (the same one LLMProviderPresenter currently imports) so all code uses a
single source of truth; update the import list to pull RateLimitQueueSnapshot
from `@shared/presenter` and ensure ExecuteWithRateLimitOptions.signal and
onQueued remain unchanged except for using the imported type.
In `@src/renderer/src/pages/ChatPage.vue`:
- Line 137: The RATE_LIMIT_STREAM_MESSAGE_PREFIX constant ('__rate_limit__:') is
used across processes and must be moved to a shared module: create and export a
constant (e.g., RATE_LIMIT_STREAM_MESSAGE_PREFIX) under src/shared (or your
shared constants module), replace the inline declaration in ChatPage.vue with an
import of that shared constant, and update any other places (main, renderer, UI
message store) that currently duplicate the string to import this new shared
symbol so all processes use the single source of truth.
In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts`:
- Around line 943-957: The test's assertion for the emitted rate-limit block is
missing the new timestamp field used by MessageBlockAction.vue; update the
expectation for rateLimitShow (the object under test in
deepchatAgentPresenter.test.ts) to include a timestamp (e.g., assert the
block.extra or block itself contains a timestamp using expect.any(Number) or a
pinned numeric value) so the emitted rate-limit block matches the renderer's
requirement and prevents elapsed-time regressions.
In `@test/main/presenter/llmProviderPresenter/rateLimitManager.test.ts`:
- Around line 1-137: The test suite for RateLimitManager is located outside the
mirrored "managers" test folder; move the test file (rateLimitManager.test.ts)
so its test path mirrors the source folder that contains the RateLimitManager
class (i.e., place it alongside other managers tests), update any imports if
needed, and ensure the describe('RateLimitManager', ...) suite remains intact so
coverage and organization match the source layout.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3ddd724f-d61c-4d90-9af5-45cf65d1a738
📒 Files selected for processing (43)
- src/main/presenter/deepchatAgentPresenter/compactionService.ts
- src/main/presenter/deepchatAgentPresenter/index.ts
- src/main/presenter/deepchatAgentPresenter/process.ts
- src/main/presenter/index.ts
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
- src/main/presenter/llmProviderPresenter/types.ts
- src/main/presenter/toolPresenter/agentTools/agentToolManager.ts
- src/main/presenter/toolPresenter/runtimePorts.ts
- src/renderer/src/components/chat/MessageList.vue
- src/renderer/src/components/message/MessageBlockAction.vue
- src/renderer/src/i18n/da-DK/chat.json
- src/renderer/src/i18n/en-US/chat.json
- src/renderer/src/i18n/fa-IR/chat.json
- src/renderer/src/i18n/fr-FR/chat.json
- src/renderer/src/i18n/he-IL/chat.json
- src/renderer/src/i18n/ja-JP/chat.json
- src/renderer/src/i18n/ko-KR/chat.json
- src/renderer/src/i18n/pt-BR/chat.json
- src/renderer/src/i18n/ru-RU/chat.json
- src/renderer/src/i18n/zh-CN/chat.json
- src/renderer/src/i18n/zh-HK/chat.json
- src/renderer/src/i18n/zh-TW/chat.json
- src/renderer/src/pages/ChatPage.vue
- src/renderer/src/stores/ui/message.ts
- src/shared/types/agent-interface.d.ts
- src/shared/types/presenters/index.d.ts
- src/shared/types/presenters/legacy.presenters.d.ts
- src/shared/types/presenters/llmprovider.presenter.d.ts
- test/main/presenter/deepchatAgentPresenter/compactionService.test.ts
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
- test/main/presenter/deepchatAgentPresenter/process.test.ts
- test/main/presenter/llmProviderPresenter/rateLimitManager.test.ts
- test/main/presenter/mcpClient.test.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerSettings.test.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerSkillAccess.test.ts
- test/main/presenter/toolPresenter/agentTools/subagentOrchestratorTool.test.ts
- test/main/presenter/toolPresenter/toolPresenter.test.ts
- test/renderer/components/ChatPage.test.ts
- test/renderer/components/MessageList.test.ts
- test/renderer/components/message/MessageBlockBasics.test.ts
- test/renderer/stores/messageStore.test.ts
src/main/presenter/toolPresenter/agentTools/agentToolManager.ts
```ts
const queued = Promise.withResolvers<void>()
llmProvider.executeWithRateLimit.mockImplementation(
  (
    _providerId: string,
    options?: { signal?: AbortSignal; onQueued?: (snapshot: any) => void }
  ) =>
    new Promise<void>((resolve, reject) => {
      options?.onQueued?.({
        providerId: 'openai',
        qpsLimit: 1,
        currentQps: 1,
        queueLength: 1,
        estimatedWaitTime: 1000
      })
      queued.resolve()

      if (options?.signal?.aborted) {
        reject(abortError)
        return
      }

      options?.signal?.addEventListener(
        'abort',
        () => {
          reject(abortError)
        },
        { once: true }
      )

      void resolve
    })
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo '== package.json runtime hints =='
jq '{engines, packageManager, dependencies: {electron: .dependencies.electron}, devDependencies: {electron: .devDependencies.electron, "@types/node": .devDependencies["@types/node"]}}' package.json 2>/dev/null || true
echo
echo '== CI / setup-node hints =='
rg -n 'node-version|setup-node|volta|engines|electron' package.json .github/workflows 2>/dev/null || true
echo
echo '== Promise.withResolvers usages =='
rg -n '\bPromise\.withResolvers\s*\(' --type ts
```

Repository: ThinkInAIXYZ/deepchat
Length of output: 4177
🏁 Script executed:

```bash
# Check if the test file exists and examine lines 967-997
fd -t f 'deepchatAgentPresenter.test.ts' --exec wc -l {}
```

Repository: ThinkInAIXYZ/deepchat
Length of output: 145
🏁 Script executed:

```bash
# Read the specific section to verify if Promise.withResolvers is actually there
sed -n '965,1000p' test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts 2>/dev/null || echo "File not found or out of range"
```

Repository: ThinkInAIXYZ/deepchat
Length of output: 1125
🏁 Script executed:

```bash
# Also check around line 1044-1045 mentioned in "Also applies to"
sed -n '1040,1050p' test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts 2>/dev/null || echo "File not found or out of range"
```

Repository: ThinkInAIXYZ/deepchat
Length of output: 462
Replace `Promise.withResolvers` with a portable equivalent; it is unsupported on Node 20.x.

The code uses `Promise.withResolvers<void>()` at line 967 (and the corresponding `await queued.promise` at line 1044), but package.json declares `engines.node` >= 20.19.0. The `Promise.withResolvers` API only shipped in Node 22.0.0, breaking the suite on the minimum supported version. CI masks the issue by running Node 22.13.1.

Replace with the portable pattern:
Portable replacement

```diff
-    const queued = Promise.withResolvers<void>()
+    let resolveQueued!: () => void
+    const queued = new Promise<void>((resolve) => {
+      resolveQueued = resolve
+    })
     llmProvider.executeWithRateLimit.mockImplementation(
       (
         _providerId: string,
         options?: { signal?: AbortSignal; onQueued?: (snapshot: any) => void }
       ) =>
         new Promise<void>((resolve, reject) => {
           options?.onQueued?.({
             providerId: 'openai',
             qpsLimit: 1,
             currentQps: 1,
             queueLength: 1,
             estimatedWaitTime: 1000
           })
-          queued.resolve()
+          resolveQueued()
           if (options?.signal?.aborted) {
             reject(abortError)
             return
           }
@@
-
-          void resolve
+          void resolve
         })
       )
@@
-    await queued.promise
+    await queued
```

Also applies to: 1044-1045
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts`
around lines 967 - 997, The test uses Promise.withResolvers (variable queued)
which is unsupported on Node 20.x; replace it with the portable resolver pattern
by creating a promise along with captured resolve/reject functions (e.g., let
queuedResolve, queuedReject; const queued = { promise: new Promise((res, rej) =>
{ queuedResolve = res; queuedReject = rej }), resolve: queuedResolve, reject:
queuedReject } or equivalent), keep the rest of the logic in
llmProvider.executeWithRateLimit the same (call queued.resolve() where currently
used and await queued.promise at the corresponding await site), and ensure any
rejects use queued.reject to mirror the original behavior.
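For reference, the portable pattern can also be packaged as a small helper. This generic version is a sketch matching the `Promise.withResolvers` shape (its name and typing are my own, not code from the PR), which keeps call sites unchanged on Node 20.x:

```typescript
// Drop-in stand-in for Promise.withResolvers<T> (ES2024, Node >= 22.0.0),
// safe on Node 20.x: capture resolve/reject from the executor.
function withResolvers<T>(): {
  promise: Promise<T>
  resolve: (value: T | PromiseLike<T>) => void
  reject: (reason?: unknown) => void
} {
  let resolve!: (value: T | PromiseLike<T>) => void
  let reject!: (reason?: unknown) => void
  const promise = new Promise<T>((res, rej) => {
    resolve = res
    reject = rej
  })
  return { promise, resolve, reject }
}
```

The definite-assignment assertions (`!`) are safe here because the `Promise` executor runs synchronously before the constructor returns.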
Actionable comments posted: 3
🧹 Nitpick comments (2)
test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts (2)
969-981: Unused `queuedReject` can be removed.

The `queuedReject` variable and `queued.reject` property are declared but never used in this test. Since the test only needs to synchronize on `queued.resolve()`, the reject path can be removed for clarity.

♻️ Suggested simplification

```diff
-    let queuedResolve!: (value?: void | PromiseLike<void>) => void
-    let queuedReject!: (reason?: unknown) => void
-    const queued = {
-      promise: new Promise<void>((resolve, reject) => {
-        queuedResolve = resolve
-        queuedReject = reject
-      }),
-      resolve: queuedResolve,
-      reject: queuedReject
-    }
+    let queuedResolve!: () => void
+    const queuedPromise = new Promise<void>((resolve) => {
+      queuedResolve = resolve
+    })
```

Then update line 995 to call `queuedResolve()` directly and line 1059 to `await queuedPromise`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts` around lines 969 - 981, Remove the unused reject path from the queued helper in the test: delete the declaration and property for queuedReject/queued.reject and only create queued.promise with a single resolver (queuedResolve). Update uses to call queuedResolve() directly where the test triggers the queued completion and replace any await queued.reject-related awaits with awaiting queued.promise (e.g., change the call at line ~995 to queuedResolve() and at line ~1059 to await queued.promise) so the test only syncs on the resolve path.
1010-1010: Consider adding a comment explaining the intentionally pending promise.

The `void resolve` pattern silences the unused-variable warning, but a brief comment would help future readers understand that this promise is designed to remain pending until aborted.

```diff
+          // Promise intentionally never resolves - only rejects on abort
           void resolve
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts` at line 1010, Add a brief inline comment explaining that the pending promise is intentional where the test uses the "void resolve" pattern (the unused variable named resolve) in deepchatAgentPresenter.test.ts, clarifying that the promise is meant to remain pending until an abort occurs; place the comment immediately above or beside the "void resolve" line and mention that it intentionally silences the unused-variable warning while keeping the promise unresolved for the abort test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/deepchatAgentPresenter/index.ts`:
- Around line 1955-1968: The abort-handling block only checks
isAbortError(error) so it misses cases where a prior AbortSignal is already
aborted; update the condition to also treat an aborted signal as an abort (e.g.
change the if to check this.isAbortError(error) ||
this.runtimeState.get(sessionId)?.signal?.aborted) so the same cleanup path
(buildTerminalErrorBlocks, messageStore.setMessageError, emitMessageRefresh,
dispatchTerminalHooks, setSessionStatus) runs when the signal has been aborted;
apply the same change to the similar catch at the other location (lines
~3612-3620).
- Around line 1484-1494: After executeWithRateLimit(...) completes and before
calling provider.coreStream(...), re-check the abortController.signal to avoid
starting a provider request if the user cancelled during the rate-limit wait: if
abortController.signal.aborted then clear the rate-limit waiting UI
(clearRateLimitWaitingMessage(sessionId, rateLimitMessageId) if
queuedForRateLimit) and abort the flow (throw or return) instead of proceeding
to provider.coreStream; ensure the same abortController.signal is passed into
provider.coreStream when allowed, and keep handling of queuedForRateLimit and
emitRateLimitWaitingMessage(sessionId, rateLimitMessageId) consistent.
In `@src/main/presenter/toolPresenter/agentTools/agentToolManager.ts`:
- Around line 1224-1229: The vision-target resolution drops the abort signal;
update resolveVisionTargetForConversation to accept an optional signal parameter
and pass that signal into the resolveSessionVisionTarget call (include the
signal and existing logLabel like `read:${conversationId}`), and update callers
such as readImageWithVisionFallback to forward their signal into
resolveVisionTargetForConversation so the lookup becomes cancellable; refer to
the functions resolveVisionTargetForConversation, readImageWithVisionFallback,
and resolveSessionVisionTarget when making the change.
---
Nitpick comments:
In `@test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts`:
- Around line 969-981: Remove the unused reject path from the queued helper in
the test: delete the declaration and property for queuedReject/queued.reject and
only create queued.promise with a single resolver (queuedResolve). Update uses
to call queuedResolve() directly where the test triggers the queued completion
and replace any await queued.reject-related awaits with awaiting queued.promise
(e.g., change the call at line ~995 to queuedResolve() and at line ~1059 to
await queued.promise) so the test only syncs on the resolve path.
- Line 1010: Add a brief inline comment explaining that the pending promise is
intentional where the test uses the "void resolve" pattern (the unused variable
named resolve) in deepchatAgentPresenter.test.ts, clarifying that the promise is
meant to remain pending until an abort occurs; place the comment immediately
above or beside the "void resolve" line and mention that it intentionally
silences the unused-variable warning while keeping the promise unresolved for
the abort test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b4918fea-a989-4648-9147-48fe76e200ce
📒 Files selected for processing (8)
- src/main/presenter/deepchatAgentPresenter/compactionService.ts
- src/main/presenter/deepchatAgentPresenter/index.ts
- src/main/presenter/index.ts
- src/main/presenter/toolPresenter/agentTools/agentToolManager.ts
- src/renderer/src/i18n/zh-CN/chat.json
- test/main/presenter/deepchatAgentPresenter/compactionService.test.ts
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts
✅ Files skipped from review due to trivial changes (1)
- src/renderer/src/i18n/zh-CN/chat.json
🚧 Files skipped from review as they are similar to previous changes (3)
- test/main/presenter/deepchatAgentPresenter/compactionService.test.ts
- src/main/presenter/index.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts
```ts
await llmProviderPresenter.executeWithRateLimit(state.providerId, {
  signal: abortController.signal,
  onQueued: (snapshot) => {
    queuedForRateLimit = true
    emitRateLimitWaitingMessage(sessionId, rateLimitMessageId, snapshot)
  }
})
if (queuedForRateLimit) {
  clearRateLimitWaitingMessage(sessionId, rateLimitMessageId)
  queuedForRateLimit = false
}
```
Re-check cancellation after the rate-limit wait.
After `executeWithRateLimit(...)` resolves, this path goes straight into `provider.coreStream(...)`. If the user cancels in that handoff window, one more provider request can still start because the signal is not checked again here.
💡 Suggested fix

```diff
   await llmProviderPresenter.executeWithRateLimit(state.providerId, {
     signal: abortController.signal,
     onQueued: (snapshot) => {
       queuedForRateLimit = true
       emitRateLimitWaitingMessage(sessionId, rateLimitMessageId, snapshot)
     }
   })
+  this.throwIfAbortRequested(abortController.signal)
   if (queuedForRateLimit) {
     clearRateLimitWaitingMessage(sessionId, rateLimitMessageId)
     queuedForRateLimit = false
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/presenter/deepchatAgentPresenter/index.ts` around lines 1484 - 1494,
After executeWithRateLimit(...) completes and before calling
provider.coreStream(...), re-check the abortController.signal to avoid starting
a provider request if the user cancelled during the rate-limit wait: if
abortController.signal.aborted then clear the rate-limit waiting UI
(clearRateLimitWaitingMessage(sessionId, rateLimitMessageId) if
queuedForRateLimit) and abort the flow (throw or return) instead of proceeding
to provider.coreStream; ensure the same abortController.signal is passed into
provider.coreStream when allowed, and keep handling of queuedForRateLimit and
emitRateLimitWaitingMessage(sessionId, rateLimitMessageId) consistent.
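The suggested fix assumes a `throwIfAbortRequested` helper. A minimal version could look like the following sketch; the helper name follows the suggested diff, but its body and the usage function are illustrative guesses, not the repo's implementation:

```typescript
// Hypothetical helper: turn an already-aborted signal into a thrown
// AbortError so callers can bail out before starting the next request.
function throwIfAbortRequested(signal?: AbortSignal): void {
  if (signal?.aborted) {
    const error = new Error('The operation was aborted')
    error.name = 'AbortError'
    throw error
  }
}

// Usage sketch: re-check right after the rate-limit wait resolves,
// before any provider call is started.
async function afterRateLimitWait(signal: AbortSignal): Promise<string> {
  throwIfAbortRequested(signal) // cancelled during the queue wait?
  return 'stream-started'
}
```

Checking synchronously between the two awaits closes the handoff window: an abort raised while queued surfaces here instead of after the provider request has already gone out.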
🧹 Nitpick comments (1)
src/main/presenter/toolPresenter/agentTools/agentToolManager.ts (1)
1271-1293: Simplify conditional signal passing.

The branching on `signal` presence is unnecessary since both `executeWithRateLimit` and `generateCompletionStandalone` accept optional signal parameters. The `{ signal }` object can be passed unconditionally when `signal` is undefined; it will simply be ignored.

♻️ Proposed simplification

```diff
   const llmProviderPresenter = this.getLlmProviderPresenter()
-  if (signal) {
-    await llmProviderPresenter.executeWithRateLimit(visionTarget.providerId, { signal })
-  } else {
-    await llmProviderPresenter.executeWithRateLimit(visionTarget.providerId)
-  }
+  await llmProviderPresenter.executeWithRateLimit(visionTarget.providerId, { signal })
   throwIfAbortRequested(signal)
-  const response = signal
-    ? await llmProviderPresenter.generateCompletionStandalone(
-        visionTarget.providerId,
-        messages,
-        visionTarget.modelId,
-        modelConfig?.temperature ?? 0.2,
-        modelConfig?.maxTokens ?? 1200,
-        { signal }
-      )
-    : await llmProviderPresenter.generateCompletionStandalone(
-        visionTarget.providerId,
-        messages,
-        visionTarget.modelId,
-        modelConfig?.temperature ?? 0.2,
-        modelConfig?.maxTokens ?? 1200
-      )
+  const response = await llmProviderPresenter.generateCompletionStandalone(
+    visionTarget.providerId,
+    messages,
+    visionTarget.modelId,
+    modelConfig?.temperature ?? 0.2,
+    modelConfig?.maxTokens ?? 1200,
+    signal ? { signal } : undefined
+  )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/toolPresenter/agentTools/agentToolManager.ts` around lines 1271 - 1293, The code currently branches on the presence of signal before calling executeWithRateLimit and generateCompletionStandalone; simplify by always passing the optional signal parameter (e.g., pass { signal } unconditionally) to llmProviderPresenter.executeWithRateLimit and llmProviderPresenter.generateCompletionStandalone in agentToolManager.ts so you can remove the duplicated conditional calls and keep the single throwIfAbortRequested(signal) call between them; locate the calls to executeWithRateLimit and generateCompletionStandalone to consolidate into one path that always forwards the optional { signal } argument.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/main/presenter/toolPresenter/agentTools/agentToolManager.ts`:
- Around line 1271-1293: The code currently branches on the presence of signal
before calling executeWithRateLimit and generateCompletionStandalone; simplify
by always passing the optional signal parameter (e.g., pass { signal }
unconditionally) to llmProviderPresenter.executeWithRateLimit and
llmProviderPresenter.generateCompletionStandalone in agentToolManager.ts so you
can remove the duplicated conditional calls and keep the single
throwIfAbortRequested(signal) call between them; locate the calls to
executeWithRateLimit and generateCompletionStandalone to consolidate into one
path that always forwards the optional { signal } argument.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: af897533-be22-4afd-9686-0b4c93cdc46d
📒 Files selected for processing (4)
- src/main/presenter/deepchatAgentPresenter/index.ts
- src/main/presenter/toolPresenter/agentTools/agentToolManager.ts
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
- test/main/presenter/toolPresenter/agentTools/agentToolManagerRead.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
Fixes #1418
Summary by CodeRabbit
New Features
Bug Fixes
Tests