Integrate Supermemory for Long-term Agent Memory #529
ngoiyaeric wants to merge 3 commits into main from
Conversation
Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Walkthrough: threads server-derived `userId` and `chatId` through the agent pipeline and wraps model selection with Supermemory (see Summary of changes below).
Sequence Diagram
```mermaid
sequenceDiagram
participant Client as Client (Browser)
participant Action as app/actions.tsx
participant Server as Server (getCurrentUserIdOnServer)
participant Agent as Agent (researcher/writer/...)
participant Utils as lib/utils/ai.ts (getModel)
participant Memory as Supermemory Tools
participant LLM as LLM Provider (Grok/Gemini/Bedrock/OpenAI)
Client->>Action: submit messages (+ chatId)
Action->>Server: fetch current userId
Server-->>Action: return userId or 'anonymous'
Action->>Agent: call agent(messages, userId, chatId)
Agent->>Utils: getModel(requireVision?, userId, chatId)
Utils->>Utils: resolve selected provider & init SDKs
alt SUPERMEMORY_API_KEY && userId
Utils->>Memory: wrap model with Supermemory (conversationId=chatId)
Memory-->>Utils: memory-augmented model
else
Utils->>LLM: select provider (Grok→Gemini→Bedrock→OpenAI)
LLM-->>Utils: model instance
end
Utils-->>Agent: return model instance
Agent->>LLM: stream generation / tool calls
LLM-->>Agent: streamed responses
Agent-->>Action: processed result
Action-->>Client: send response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
The main issues are memory scoping and duplication: you’re integrating Supermemory both via withSupermemory(...) and via explicit searchMemories/addMemory tools, which risks double-writes and inconsistent retrieval. addMemory: 'always' is a privacy/security footgun because it can persist sensitive or irrelevant content by default. Tool initialization also overloads projectId with userId, which is likely incorrect and can cause fragmentation or unintended tenant behavior. Finally, chatId should be stabilized before use to avoid cross-conversation memory bleed.
Additional notes (2)
- Maintainability: app/actions.tsx:105
chatId may be missing/unstable for conversation scoping
You’re passing aiState.get().chatId into multiple agents and into withSupermemory(... conversationId: chatId). If chatId can be undefined early in the lifecycle or can change across server actions, memory threads may be merged incorrectly or split unexpectedly.
This is especially problematic for long-term memory where stable conversation boundaries matter.
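One way to enforce stable boundaries, as a minimal sketch: mint the id once per request before any agent call. `nanoid` and the mutable state shape here are illustrative assumptions, not the PR's actual API:

```ts
import { nanoid } from 'nanoid'

// Hypothetical helper: resolve chatId exactly once per submit() invocation so
// every agent and every withSupermemory({ conversationId }) call agrees on it.
function ensureChatId(state: { chatId?: string }): string {
  if (!state.chatId) {
    state.chatId = nanoid()
  }
  return state.chatId
}
```

`submit` would then thread the returned value instead of re-reading `aiState.get().chatId` at each call site.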
- Security: lib/agents/researcher.tsx:32-36
The "Long-term Memory" prompt instructs the model to store "important business details" and preferences and implies this should happen broadly. Combined withaddMemory: 'always'inwithSupermemory(...), this can lead to over-collection (storing sensitive or irrelevant user content) and makes it hard to comply with data minimization expectations.
At minimum, the prompt should define what not to store (secrets, credentials, personal identifiers beyond what’s needed) and require user intent/benefit before writing memories.
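As an illustration only (not the PR's wording), a tightened directive might look like the following, assuming it stays as item 6 of the default prompt:

```ts
// Illustrative prompt text; adjust to the project's actual prompt builder.
const memoryDirective = `
6. **Long-term Memory:** Use 'searchMemories' before answering when past
   preferences could help. Only call 'addMemory' when the user states a durable
   preference or explicitly asks to be remembered. Never store secrets,
   credentials, payment details, or personal identifiers beyond what the stated
   preference requires.`
```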
Summary of changes
This PR integrates Supermemory as a long-term memory layer across the agent system.
Key updates
- **User identity propagation**: `app/actions.tsx` now resolves an authenticated `userId` (fallback: `anonymous`) and threads `userId` + `chatId` through agent calls like `taskManager`, `inquire`, `researcher`, `writer`, `querySuggestor`, and `resolutionSearch`.
- **Model wrapper integration**: `lib/utils/index.ts`'s `getModel(requireVision, userId?, chatId?)` now wraps the selected base provider with `withSupermemory(...)` when `SUPERMEMORY_API_KEY` is available.
- **Tooling expansion**: `lib/agents/tools/index.tsx` conditionally adds `searchMemories` and `addMemory` via `supermemoryTools(...)` when `SUPERMEMORY_API_KEY` and `userId` are present.
- **Agent prompt changes**: `lib/agents/researcher.tsx`'s default system prompt gains Long-term Memory guidance (use `searchMemories`/`addMemory`).
- **Dependencies**: `package.json` adds `@supermemory/tools@^1.4.0`.
lib/utils/index.ts (outdated diff)
```ts
const model = await getBaseModel();

if (process.env.SUPERMEMORY_API_KEY && actualUserId) {
  return withSupermemory(model as any, actualUserId, {
    conversationId: chatId,
    mode: 'full',
    addMemory: 'always'
  });
}

return model;
```
Avoid double-wrapping Supermemory (model-level + tool-level)
getModel(...) now wraps the language model with withSupermemory(...) (mode full, addMemory: 'always'). Separately, getTools(...) conditionally adds searchMemories/addMemory tools that likely talk to the same Supermemory backend.
This creates a real risk of:
- Duplicate memory writes (model wrapper auto-saves + explicit `addMemory` tool calls)
- Conflicting retrieval strategies (the wrapper may inject memory context while tools also fetch it)
- Hard-to-debug behavior when memory is used implicitly and explicitly in the same turn
Given the researcher prompt explicitly instructs the model to call addMemory, leaving addMemory: 'always' on the wrapper is especially risky.
Suggestion
Pick one integration path:
Option A (recommended): tools-driven memory
- Keep `searchMemories`/`addMemory` tools in `getTools(...)`
- Change the model wrapper to retrieval-only / no auto-write, e.g.:
(Exact option names depend on the SDK, but the goal is: no automatic addMemory.)
```ts
return withSupermemory(model as any, actualUserId, {
  conversationId: chatId,
  mode: 'retrieve',
  addMemory: 'never',
})
```
Option B: wrapper-driven memory
- Remove `searchMemories`/`addMemory` tools from `getTools(...)`
- Update the system prompt so it no longer mentions those tools.
Reply with "@CharlieHelps yes please" if you’d like me to add a commit implementing Option A and adjusting the prompt accordingly.
lib/utils/index.ts (outdated diff)
```ts
export async function getModel(requireVision: boolean = false, userId?: string, chatId?: string) {
  const actualUserId = userId || await getCurrentUserIdOnServer();
```
Inconsistent/duplicated userId resolution paths
app/actions.tsx dynamically imports getCurrentUserIdOnServer() and computes userId, but getModel(...) also imports and calls getCurrentUserIdOnServer() when userId isn’t provided.
This duplication increases complexity and can lead to subtle inconsistencies (e.g., one path falls back to 'anonymous', the other might return undefined, resulting in different Supermemory behavior).
Suggestion
Make caller-supplied identity the single source of truth:
- Remove `getCurrentUserIdOnServer()` from `getModel(...)`
- Require callers (like `submit`) to pass `userId` explicitly
- If you need a fallback, normalize it in one place (e.g. `submit`); a sketch follows below
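A minimal sketch of the resulting shape; `getBaseModel` and `withSupermemory` are assumed to be the module's existing imports, and the wrapper options are illustrative:

```ts
// Sketch: getModel no longer performs auth lookups; identity is the caller's job.
export async function getModel(
  requireVision: boolean = false,
  userId?: string,
  chatId?: string
) {
  const model = await getBaseModel() // existing provider selection
  if (process.env.SUPERMEMORY_API_KEY && userId) {
    return withSupermemory(model, userId, { conversationId: chatId })
  }
  return model
}
```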
Reply with "@CharlieHelps yes please" if you want me to add a commit that removes server auth lookup from getModel and standardizes userId/chatId threading.
lib/utils/index.ts (outdated diff)
```ts
const model = await getBaseModel();

if (process.env.SUPERMEMORY_API_KEY && actualUserId) {
  return withSupermemory(model as any, actualUserId, {
    conversationId: chatId,
    mode: 'full',
    addMemory: 'always'
  });
```
withSupermemory(model as any, ...) introduces an explicit any cast, which defeats type-safety and can hide integration mistakes (e.g., incompatible model wrappers). Since CI guarantees tsc passes, this is likely being used to bypass a mismatched generic constraint.
This is risky in a core utility because it silently allows incompatible models to be wrapped and then fail at runtime in ways TypeScript could have prevented.
Suggestion
Avoid as any by typing the base model to the expected interface that withSupermemory accepts (or by narrowing via an explicit adapter). For example:
- Import the proper model type expected by `withSupermemory` from the SDK (if exported), and return that from `getBaseModel()`.
- Or add a small wrapper function that constrains the model type once, instead of casting at the call site (sketched below).
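A small sketch of the adapter idea; `LanguageModel` from the `ai` package and the `withSupermemory` import path are assumptions to verify against the SDK's actual exports:

```ts
import type { LanguageModel } from 'ai'
import { withSupermemory } from '@supermemory/tools/ai-sdk' // path assumed

// One audited adapter instead of `as any` scattered at call sites. If the SDK's
// expected model type differs from LanguageModel, the mismatch surfaces here.
function wrapWithMemory(
  model: LanguageModel,
  userId: string,
  chatId?: string
): LanguageModel {
  return withSupermemory(model, userId, { conversationId: chatId })
}
```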
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the any cast and aligns types with withSupermemory.
```ts
if (process.env.SUPERMEMORY_API_KEY && userId) {
  const memoryTools = supermemoryTools(process.env.SUPERMEMORY_API_KEY, {
    projectId: userId
  })
  tools.searchMemories = memoryTools.searchMemories
  tools.addMemory = memoryTools.addMemory
}
```
projectId should not be the userId
supermemoryTools(..., { projectId: userId }) makes every user their own “project”. That’s usually not what projectId means and can cause:
- unexpected isolation/fragmentation in Supermemory
- potential unbounded project creation
- harder admin/analytics and migration
Typically:
- `projectId` should be a stable application/workspace identifier (e.g. an env var)
- user scoping should be done via `userId`/`conversationId`/`namespace` depending on the SDK
Right now, you’re also passing actualUserId to withSupermemory(...), so there’s a high chance these two layers are scoping memory differently.
Suggestion
Introduce an app-level project id and keep user scoping separate, e.g.:
```ts
const projectId = process.env.SUPERMEMORY_PROJECT_ID
if (!projectId) throw new Error('SUPERMEMORY_PROJECT_ID is required when enabling Supermemory tools')

const memoryTools = supermemoryTools(process.env.SUPERMEMORY_API_KEY, {
  projectId,
  // if the SDK supports it, pass userId here instead of overloading projectId
  userId,
})
```
If the SDK doesn't support a `userId` option, consider namespacing via `conversationId`/tags or switch to wrapper-only integration so scoping is consistent.
Reply with "@CharlieHelps yes please" if you’d like me to add a commit that introduces SUPERMEMORY_PROJECT_ID and refactors tool initialization accordingly.
app/actions.tsx (outdated diff)
```ts
const { getCurrentUserIdOnServer } = await import('@/lib/auth/get-current-user')
const actualUserId = await getCurrentUserIdOnServer()
const userId = actualUserId || 'anonymous'
```
submit() sets userId = actualUserId || 'anonymous' and then passes that userId into memory-enabled paths. If SUPERMEMORY_API_KEY is set, unauthenticated users will share the same 'anonymous' identity, causing cross-session memory leakage between different anonymous visitors (and potentially across devices).
Even though tools are gated on userId presence, 'anonymous' is still a truthy userId and would enable the tools if passed through.
Suggestion
Do not use a shared sentinel value for identity in any memory-backed feature.
Options:
- If unauthenticated: pass `undefined` for `userId` (disables memory/tools), OR
- Generate a per-session/per-device anonymous ID and store it in a secure cookie (so it's not shared across users); see the sketch below.

At minimum, change memory/tool enabling checks to `actualUserId` rather than `userId`.
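A sketch of the per-session option using Next.js cookies; the cookie name and helper are hypothetical, and `cookies()` is awaited per Next 15 (drop the `await` on earlier versions):

```ts
import { cookies } from 'next/headers'
import { randomUUID } from 'crypto'

// Hypothetical helper: each unauthenticated browser gets its own memory scope
// instead of sharing the 'anonymous' sentinel. Call from a Server Action.
export async function getOrCreateAnonymousId(): Promise<string> {
  const store = await cookies()
  const existing = store.get('anon_memory_id')?.value
  if (existing) return existing

  const id = `anon_${randomUUID()}`
  store.set('anon_memory_id', id, {
    httpOnly: true,
    secure: true,
    sameSite: 'lax',
    maxAge: 60 * 60 * 24 * 365, // one year
  })
  return id
}
```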
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that disables Supermemory for unauthenticated users (or generates a per-session anonymous ID).
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/agents/writer.tsx (1)
30-30: ⚠️ Potential issue | 🟡 Minor. Typos in the default system prompt.
"proconfigured" → "preconfigured" and "queires" → "queries".
Proposed fix
```diff
- There are also some proconfigured example queires.
+ There are also some preconfigured example queries.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/writer.tsx` at line 30, Fix the typos in the default system prompt string in lib/agents/writer.tsx by replacing "proconfigured" with "preconfigured" and "queires" with "queries"; locate the prompt text used by the writer agent (e.g., the default system prompt constant or variable named like defaultSystemPrompt / SYSTEM_PROMPT / systemPrompt) and update the two misspelled words in that string so the prompt reads correctly.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (10)
`app/actions.tsx`, `lib/agents/inquire.tsx`, `lib/agents/query-suggestor.tsx`, `lib/agents/researcher.tsx`, `lib/agents/resolution-search.tsx`, `lib/agents/task-manager.tsx`, `lib/agents/tools/index.tsx`, `lib/agents/writer.tsx`, `lib/utils/index.ts`, `package.json`
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files: `lib/utils/index.ts`, `lib/agents/task-manager.tsx`
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Applied to files: `app/actions.tsx`, `lib/agents/writer.tsx`, `lib/agents/researcher.tsx`
🧬 Code graph analysis (3)
lib/utils/index.ts (2)
- lib/auth/get-current-user.ts (1): `getCurrentUserIdOnServer` (112-127)
- lib/actions/users.ts (2): `getSelectedModel` (146-162), `saveSelectedModel` (167-182)

lib/agents/resolution-search.tsx (1)
- lib/utils/index.ts (1): `getModel` (26-143)

lib/agents/inquire.tsx (2)
- lib/utils/index.ts (1): `getModel` (26-143)
- lib/actions/chat.ts (1): `getSystemPrompt` (169-185)
🔇 Additional comments (10)
package.json (1)
- Line 48: LGTM! New dependency addition aligns with the PR objective for Supermemory integration. The caret range `^1.4.0` is appropriate for a feature dependency.

lib/agents/task-manager.tsx (1)
- Line 6: Signature change is clean and consistent. The optional `userId` and `chatId` parameters align with the broader context-propagation pattern across all agents.

lib/agents/inquire.tsx (2)
- Lines 12-17: LGTM! Signature extension and model resolution with user context are consistent with the pattern across all agents.
- Lines 37-50: Streaming loop correctly updates both the data stream and UI. The progressive update pattern (local state → objectStream → UI) is well-structured and ensures the Copilot component stays in sync with the partial object stream.

lib/agents/resolution-search.tsx (1)
- Lines 41-48: LGTM! Signature extension is consistent with the other agents. Notably, this is the only agent that correctly uses the `requireVision` parameter (`hasImage`) when calling `getModel`, which is appropriate for image analysis workflows.

lib/agents/query-suggestor.tsx (1)
- Lines 8-13: LGTM! Signature extension and model resolution changes are consistent with the other agents.

lib/agents/writer.tsx (1)
- Lines 7-13: LGTM! Signature extension follows the same pattern as the other agents. Since `writer` uses `streamText` (not structured output), it is compatible with all providers including Grok.

lib/agents/researcher.tsx (1)
- Lines 90-122: LGTM: signature extension and updated call sites are consistent. `userId` and `chatId` are correctly threaded from the call in `app/actions.tsx` line 447 through `getModel` (line 117) and `getTools` (line 121). Making them optional preserves backward compatibility.

app/actions.tsx (2)
- Line 108: Remaining agent call-site propagation is consistent and correct. `userId` and `chatId` are uniformly threaded through `resolutionSearch`, `querySuggestor`, `taskManager`, `inquire`, `writer`, and the second `querySuggestor` invocation. The approach is consistent across all six sites. Also applies to: 154, 410, 417, 485, 491.
- Line 447: Likely an incorrect or invalid review comment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/actions.tsx`:
- Around line 44-46: Hoist the duplicated import by replacing the inline dynamic
imports of "@/lib/auth/get-current-user" with a single shared import: import {
getCurrentUserIdOnServer } from '@/lib/auth/get-current-user' (or create a small
helper that exports getCurrentUserIdOnServer) and use that exported function in
both the top-level code that computes userId (actualUserId/userId) and inside
onSetAIState; then remove the duplicate await import(...) call inside
onSetAIState so the module is resolved once and reused.
- Around line 44-46: The current fallback sets userId to the literal string
'anonymous' (from the variables actualUserId and userId after calling
getCurrentUserIdOnServer()), which makes unauthenticated sessions truthy and
shares a single memory namespace; change the fallback so that when actualUserId
is missing you assign undefined (i.e., leave userId undefined for
unauthenticated users) so the guard in the memory tools registration (the check
using process.env.SUPERMEMORY_API_KEY && userId) will correctly reject
unauthenticated sessions and prevent shared Supermemory access.
In `@lib/agents/researcher.tsx`:
- Line 35: The system prompt unconditionally tells the AI to use searchMemories
and addMemory even though lib/agents/tools/index.tsx only registers those tools
when SUPERMEMORY_API_KEY and a real userId exist, and getDefaultSystemPrompt may
be bypassed for custom prompts; update the researcher function to detect whether
the memory tools are actually registered (e.g., check the same condition used in
lib/agents/tools/index.tsx: SUPERMEMORY_API_KEY presence and a valid userId) and
only inject or append the memory instructions into the system prompt when that
check passes; ensure this logic also runs when a custom prompt is provided so
the memory directive is consistently present only when searchMemories/addMemory
are available.
In `@lib/agents/task-manager.tsx`:
- Around line 18-19: taskManager calls generateObject while passing a model from
getModel which may return X.AI's grok-4-fast-non-reasoning (incompatible with
JSON Schema structured-output); update the code so getModel is asked for a
structured-output compatible model or force a compatible fallback: add a
requireStructuredOutput boolean parameter to getModel (or a dedicated
getStructuredModel helper) and use it in taskManager (and other callers like
inquire, querySuggestor, resolutionSearch) so generateObject/streamObject never
receive grok; ensure the fallback chain prefers an OpenAI model such as gpt-4o
when requireStructuredOutput is true and document the new flag in getModel's
signature.
In `@lib/agents/tools/index.tsx`:
- Around line 40-46: The supermemoryTools call is using the wrong config key;
replace the { projectId: userId } argument in the supermemoryTools(...)
invocation with { containerTags: [userId] } so the SDK scopes memories
correctly; update the call that creates memoryTools (the supermemoryTools(...)
expression) and ensure tools.searchMemories and tools.addMemory still get
assigned from the returned memoryTools object.
In `@lib/utils/index.ts`:
- Around line 134-139: The code currently calls withSupermemory(model,
actualUserId, {conversationId: chatId, mode: 'full', addMemory: 'always'}) which
unconditionally persists all conversations; change this to respect privacy by
switching addMemory from 'always' to a safer default (e.g., 'auto') or make
addMemory configurable (via an environment variable or a per-user flag) and only
enable 'always' when an explicit opt-in is present; update the call site in
lib/utils/index.ts where withSupermemory and addMemory are used (referencing
SUPERMEMORY_API_KEY, actualUserId, conversationId/chatId, and model) to read the
new config and ensure you document or wire an opt-in/check before using
'always'.
- Around line 126-129: The code creates an OpenAI client with apiKey potentially
undefined (createOpenAI called with openaiApiKey) and then returns
openai('gpt-4o') without verifying credentials; add a guard that checks
openaiApiKey (or overall provider availability) before instantiating/returning
the OpenAI model and, if missing, throw a descriptive error like "No AI provider
configured: set OPENAI_API_KEY or configure another provider" so callers get a
clear failure instead of an undefined-key runtime error; update the logic around
createOpenAI, the openai variable, and the return openai('gpt-4o') to perform
this check.
- Around line 73-82: The case for 'GPT-5.1' in the switch returns
openai('gpt-4o') which mismatches the selection; update the model identifier
used in the createOpenAI/openai call to the correct GPT-5.1 model name (or
rename the selection) so the behavior matches the label—specifically edit the
'GPT-5.1' branch that calls createOpenAI(...) and replace openai('gpt-4o') with
the proper model identifier for GPT-5.1 (or change the case label if you intend
to use gpt-4o).
- Around line 40-83: The switch on selectedModel (the switch handling 'Grok
4.2', 'Gemini 3', 'GPT-5.1' in lib/utils/index.ts) relies on each case
returning/throwing but has no break or default, which is fragile; update the
switch to include an explicit default case that either throws for unknown model
names or delegates to the existing default provider chain, and optionally add
explicit break statements (or keep the current returns) to make control flow
explicit so future edits won't accidentally fall through — locate the switch
around the selectedModel handling and implement the default behavior and/or
breaks accordingly.
- Around line 86-129: The try-catch blocks around the synchronous factory calls
(createXai -> xai('grok-4-fast-non-reasoning') and createGoogleGenerativeAI ->
google('gemini-3-pro-preview')) are ineffective because those calls only return
model references and do not perform network I/O; remove those misleading
try/catch blocks and simply select providers based on API key presence (keep the
existing selection order using createXai, createGoogleGenerativeAI,
createAmazonBedrock, createOpenAI and return the model reference directly), or
alternatively implement real runtime health checks where the actual calls are
made — but for a minimal fix remove the try/catch around xai(...) and
google(...).
- Around line 26-27: The getModel function accepts requireVision but never uses
it; update getModel(requireVision: boolean = false, userId?: string, chatId?:
string) to incorporate requireVision into model selection (inside getModel),
e.g., when requireVision is true prefer/return a vision-capable model variant or
throw a clear error if no vision model is available; modify the selection logic
that currently uses actualUserId (and any downstream chooser function) to check
requireVision first and route to the appropriate model id or configuration so
callers like resolution-search and researcher receive a vision-capable model
when hasImage is true.
---
Outside diff comments:
In `@lib/agents/writer.tsx`:
- Line 30: Fix the typos in the default system prompt string in
lib/agents/writer.tsx by replacing "proconfigured" with "preconfigured" and
"queires" with "queries"; locate the prompt text used by the writer agent (e.g.,
the default system prompt constant or variable named like defaultSystemPrompt /
SYSTEM_PROMPT / systemPrompt) and update the two misspelled words in that string
so the prompt reads correctly.
app/actions.tsx (outdated diff)
```ts
const { getCurrentUserIdOnServer } = await import('@/lib/auth/get-current-user')
const actualUserId = await getCurrentUserIdOnServer()
const userId = actualUserId || 'anonymous'
```
🧹 Nitpick | 🔵 Trivial
Duplicate dynamic import of @/lib/auth/get-current-user.
The same await import('@/lib/auth/get-current-user') pattern appears again at lines 630–631 inside onSetAIState. Extract it to a shared helper or hoist it to a top-level static import to avoid repeated module-resolution overhead per request.
♻️ Example consolidation
```diff
+import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user'
+
 async function submit(formData?: FormData, skip?: boolean) {
   'use server'
   ...
-  const { getCurrentUserIdOnServer } = await import('@/lib/auth/get-current-user')
   const actualUserId = await getCurrentUserIdOnServer()
```
And remove the identical dynamic import block at lines 630–631.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/actions.tsx` around lines 44 - 46, Hoist the duplicated import by
replacing the inline dynamic imports of "@/lib/auth/get-current-user" with a
single shared import: import { getCurrentUserIdOnServer } from
'@/lib/auth/get-current-user' (or create a small helper that exports
getCurrentUserIdOnServer) and use that exported function in both the top-level
code that computes userId (actualUserId/userId) and inside onSetAIState; then
remove the duplicate await import(...) call inside onSetAIState so the module is
resolved once and reused.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for SUPERMEMORY_API_KEY and memory tool registration in lib/agents/tools/
echo "=== Searching for SUPERMEMORY_API_KEY usage ==="
find lib -name "*.ts" -o -name "*.tsx" | xargs grep -l "SUPERMEMORY_API_KEY" 2>/dev/null
echo -e "\n=== Full context around SUPERMEMORY_API_KEY checks ==="
rg -n --type=ts --type=tsx "SUPERMEMORY_API_KEY" -A 10 -B 5 lib/agents/tools/
echo -e "\n=== Searching for memory tool registrations ==="
rg -n --type=ts --type=tsx "searchMemories|addMemory" -B 3 -A 3 lib/agents/tools/Repository: QueueLab/QCX
Length of output: 327
🏁 Script executed:
#!/bin/bash
# Search without type flags
echo "=== lib/agents/tools/index.tsx content ==="
wc -l lib/agents/tools/index.tsx
cat -n lib/agents/tools/index.tsx | head -200
echo -e "\n=== Search for SUPERMEMORY_API_KEY in context ==="
rg -n "SUPERMEMORY_API_KEY" -B 5 -A 10 lib/agents/tools/index.tsx
echo -e "\n=== Search for userId checks ==="
rg -n "userId" -B 2 -A 2 lib/agents/tools/index.tsxRepository: QueueLab/QCX
Length of output: 2629
🏁 Script executed:
#!/bin/bash
# Verify how getTools is called from app/actions.tsx
echo "=== How getTools is called in app/actions.tsx ==="
rg -n "getTools" app/actions.tsx -B 3 -A 3
echo -e "\n=== Verify userId variable at point of call ==="
rg -n "const userId|getTools\|await getTools" app/actions.tsx -B 5 -A 2Repository: QueueLab/QCX
Length of output: 518
🏁 Script executed:
#!/bin/bash
# Find all getTools invocations more broadly
echo "=== Search for getTools in entire app/actions.tsx ==="
rg -n "getTools" app/actions.tsx
echo -e "\n=== Check if userId is used elsewhere in the file ==="
rg -n "userId" app/actions.tsx -B 2 -A 2
echo -e "\n=== Search for tool-related operations ==="
rg -n "tools\[|tools\." app/actions.tsx | head -20Repository: QueueLab/QCX
Length of output: 2502
🏁 Script executed:
#!/bin/bash
# researcher and writer likely call getTools internally
echo "=== Find researcher function definition ==="
rg -n "export.*researcher|function researcher" -A 15 lib/
echo -e "\n=== Find where getTools is actually called ==="
rg -n "getTools\(" lib/ -B 3 -A 3Repository: QueueLab/QCX
Length of output: 1564
🏁 Script executed:
#!/bin/bash
# Verify writer function also passes userId to getTools
echo "=== Find writer function and getTools usage ==="
rg -n "export.*writer|function writer" lib/ -A 20 | grep -A 20 "writer"
echo -e "\n=== Check writer.tsx specifically ==="
rg -n "getTools" lib/agents/writer.tsx -B 3 -A 3Repository: QueueLab/QCX
Length of output: 2079
🏁 Script executed:
#!/bin/bash
# Check if writer.tsx uses getTools at all
echo "=== Full search for getTools in lib/agents/ ==="
rg -l "getTools" lib/agents/
echo -e "\n=== All usages of getTools ==="
rg -n "getTools" lib/agents/ -B 2 -A 2
echo -e "\n=== Check writer.tsx structure ==="
wc -l lib/agents/writer.tsx
rg -n "tools|getTools" lib/agents/writer.tsxRepository: QueueLab/QCX
Length of output: 1181
🏁 Script executed:
#!/bin/bash
# Verify writer doesn't use tools at all
echo "=== Writer.tsx full content ==="
cat -n lib/agents/writer.tsx
echo -e "\n=== Verify supermemory projectId usage ==="
rg -n "projectId" lib/Repository: QueueLab/QCX
Length of output: 3148
'anonymous' fallback enables shared Supermemory access for all unauthenticated users — privacy breach.
The guard in lib/agents/tools/index.tsx at line 40 is a simple truthy check: if (process.env.SUPERMEMORY_API_KEY && userId). The string 'anonymous' from line 46 of app/actions.tsx is truthy, so memory tools (searchMemories, addMemory) are registered and invoked for every unauthenticated session with projectId: 'anonymous' (line 42). All guest users share this single namespace — any memory written by one unauthenticated session is immediately readable and pollutable by every other unauthenticated user.
Change line 46 to pass undefined for unauthenticated users:
```diff
- const userId = actualUserId || 'anonymous'
+ const userId = actualUserId || undefined
```
This ensures the guard at lib/agents/tools/index.tsx:40 naturally rejects unauthenticated users and prevents memory tools from being registered.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 const { getCurrentUserIdOnServer } = await import('@/lib/auth/get-current-user')
 const actualUserId = await getCurrentUserIdOnServer()
-const userId = actualUserId || 'anonymous'
+const userId = actualUserId || undefined
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/actions.tsx` around lines 44 - 46, The current fallback sets userId to
the literal string 'anonymous' (from the variables actualUserId and userId after
calling getCurrentUserIdOnServer()), which makes unauthenticated sessions truthy
and shares a single memory namespace; change the fallback so that when
actualUserId is missing you assign undefined (i.e., leave userId undefined for
unauthenticated users) so the guard in the memory tools registration (the check
using process.env.SUPERMEMORY_API_KEY && userId) will correctly reject
unauthenticated sessions and prevent shared Supermemory access.
lib/agents/researcher.tsx (outdated diff)
```
3. **Search Specificity:** When using the 'search' tool, formulate queries that are as specific as possible.
4. **Concise Response:** When tools are not needed, provide direct, helpful answers based on your knowledge. Match the user's language.
5. **Citations:** Always cite source URLs when using information from tools.
6. **Long-term Memory:** You have access to the user's long-term memory. Use 'searchMemories' to retrieve past preferences, business intricacies, or context from previous yearly usage. Use 'addMemory' to save new preferences or important business details that should be remembered across sessions to improve personalized service incrementally.
```
Memory tools directive is unconditional but tools are conditionally registered.
The system prompt always instructs the AI to call searchMemories and addMemory, but (per the AI summary) lib/agents/tools/index.tsx only registers those tools when SUPERMEMORY_API_KEY is set and a real userId is provided. This creates two failure modes:
- No API key / unauthenticated user → the AI is told it has memory tools that don't exist in the active tool set, likely producing tool-call errors or confusing fallback behavior.
- Custom system prompt path (lines 105–108) → `getDefaultSystemPrompt` is skipped entirely, so authenticated users with a custom prompt receive no memory-tool instructions at all; behavior is inconsistent across user segments.
The directive should be injected conditionally, or appended to any active system prompt (custom or default) only when the tools are actually available.
🛠️ Suggested approach
```diff
-const getDefaultSystemPrompt = (date: string, drawnFeatures?: DrawnFeature[]) => raw`
+const getDefaultSystemPrompt = (date: string, drawnFeatures?: DrawnFeature[], hasMemoryTools?: boolean) => raw`
 ...
-6. **Long-term Memory:** You have access to the user's long-term memory. Use 'searchMemories' to retrieve past preferences ...
+${hasMemoryTools ? `6. **Long-term Memory:** ...` : ''}
```
And in the researcher function body:
```diff
+const hasMemoryTools = !!process.env.SUPERMEMORY_API_KEY && !!userId && userId !== 'anonymous'
+const memoryDirective = hasMemoryTools
+  ? `\n6. **Long-term Memory:** ...`
+  : ''
 const systemPromptToUse =
   dynamicSystemPrompt?.trim()
-    ? dynamicSystemPrompt
+    ? dynamicSystemPrompt + memoryDirective
-    : getDefaultSystemPrompt(currentDate, drawnFeatures)
+    : getDefaultSystemPrompt(currentDate, drawnFeatures, hasMemoryTools)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/researcher.tsx` at line 35, The system prompt unconditionally
tells the AI to use searchMemories and addMemory even though
lib/agents/tools/index.tsx only registers those tools when SUPERMEMORY_API_KEY
and a real userId exist, and getDefaultSystemPrompt may be bypassed for custom
prompts; update the researcher function to detect whether the memory tools are
actually registered (e.g., check the same condition used in
lib/agents/tools/index.tsx: SUPERMEMORY_API_KEY presence and a valid userId) and
only inject or append the memory instructions into the system prompt when that
check passes; ensure this logic also runs when a custom prompt is provided so
the memory directive is consistently present only when searchMemories/addMemory
are available.
lib/agents/task-manager.tsx (outdated diff)
```diff
 const result = await generateObject({
-  model: (await getModel()) as LanguageModel,
+  model: (await getModel(false, userId, chatId)) as LanguageModel,
```
generateObject may fail with the default Grok model.
taskManager uses generateObject (structured output), but the default fallback chain in getModel returns grok-4-fast-non-reasoning first. X.AI's API rejects JSON Schema structured-output arguments for this model. This will cause a runtime error when XAI_API_KEY is set and no specific model is selected.
Consider either: (1) forcing a compatible model (e.g., OpenAI gpt-4o) for agents that require structured output, or (2) adding a requireStructuredOutput flag to getModel that skips Grok in the fallback chain.
This same issue applies to other agents using generateObject/streamObject (inquire, querySuggestor, resolutionSearch).
Based on learnings: "Workers that rely on ai.generateObject … must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI's chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs."
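A sketch of option (2), reusing the provider variables and factory calls already present in `lib/utils/index.ts`; the flag name follows this suggestion and is not existing API:

```ts
// Sketch: skip Grok whenever the caller needs JSON Schema structured output.
// xaiApiKey/openaiApiKey/createXai/createOpenAI are the module's existing bindings.
export async function getModel(
  requireVision = false,
  userId?: string,
  chatId?: string,
  requireStructuredOutput = false
) {
  if (xaiApiKey && !requireStructuredOutput) {
    const xai = createXai({ apiKey: xaiApiKey, baseURL: 'https://api.x.ai/v1' })
    return xai('grok-4-fast-non-reasoning')
  }
  // ...Gemini/Bedrock selection as today...
  if (openaiApiKey) {
    const openai = createOpenAI({ apiKey: openaiApiKey })
    return openai('gpt-4o') // known-good for generateObject/streamObject
  }
  throw new Error('No structured-output-capable provider configured')
}
```

Callers like `taskManager` would then request `await getModel(false, userId, chatId, true)` before calling `generateObject`.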
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/task-manager.tsx` around lines 18 - 19, taskManager calls
generateObject while passing a model from getModel which may return X.AI's
grok-4-fast-non-reasoning (incompatible with JSON Schema structured-output);
update the code so getModel is asked for a structured-output compatible model or
force a compatible fallback: add a requireStructuredOutput boolean parameter to
getModel (or a dedicated getStructuredModel helper) and use it in taskManager
(and other callers like inquire, querySuggestor, resolutionSearch) so
generateObject/streamObject never receive grok; ensure the fallback chain
prefers an OpenAI model such as gpt-4o when requireStructuredOutput is true and
document the new flag in getModel's signature.
```ts
if (process.env.SUPERMEMORY_API_KEY && userId) {
  const memoryTools = supermemoryTools(process.env.SUPERMEMORY_API_KEY, {
    projectId: userId
  })
  tools.searchMemories = memoryTools.searchMemories
  tools.addMemory = memoryTools.addMemory
}
```
🧩 Analysis chain
🌐 Web query:
@supermemory/tools ai-sdk supermemoryTools API documentation projectId
💡 Result:
- The Vercel AI SDK integration (`supermemoryTools`) lives in `@supermemory/tools/ai-sdk` and is documented in Supermemory's "AI SDK Integration" cookbook. In their examples, "projectId" is not a special Supermemory field; it's just your app's identifier that you pass as a container tag to scope/isolate memories per project/user/org: `tools: supermemoryTools(SUPERMEMORY_API_KEY, { containerTags: [projectId] })`. [1]
- The Vercel AI SDK docs also show the same integration pattern: `supermemoryTools(process.env.SUPERMEMORY_API_KEY!)`. [2]
- If you specifically need a `projectId` option, that appears in Supermemory's Claude Memory Tool API (`@supermemory/tools/claude-memory`), where `createClaudeMemoryTool(apiKey, config)` supports `projectId` (and notes `containerTags` as an alternative). [3]
Sources: [1] (supermemory.ai) [2] (ai-sdk.dev) [3] (supermemory.ai)
Citations:
- 1: https://supermemory.ai/docs/cookbook/ai-sdk-integration
- 2: https://ai-sdk.dev/docs/agents/memory?utm_source=openai
- 3: https://supermemory.ai/docs/memory-api/sdks/anthropic-claude-memory
🏁 Script executed:
# First, let's examine the file structure and understand the context better
cd lib/agents/tools && head -50 index.tsx

Repository: QueueLab/QCX
Length of output: 1283
🏁 Script executed:
# Look for any package.json or lock files that might indicate Supermemory version
cd / && find . -name "package*.json" -type f | head -5 | xargs ls -la

Repository: QueueLab/QCX
Length of output: 685
🏁 Script executed:
# Search for any documentation or examples of supermemoryTools usage in the repo
rg "supermemoryTools\|projectId" --type tsx --type ts --type json -B 2 -A 2Repository: QueueLab/QCX
Length of output: 83
🏁 Script executed:
# Let me verify if there are any other references to this pattern in the codebase
rg "supermemoryTools|containerTags" --type ts --type tsx -B 1 -A 1Repository: QueueLab/QCX
Length of output: 83
🏁 Script executed:
# Also check if there's a node_modules or package-lock that confirms the installed version
cat package.json | grep -A 2 "@supermemory"

Repository: QueueLab/QCX
Length of output: 163
🏁 Script executed:
# Let me search for the actual code using a different approach
grep -r "supermemoryTools" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" -B 2 -A 2Repository: QueueLab/QCX
Length of output: 695
🏁 Script executed:
# Check if there are any test files or examples using supermemoryTools
find . -type f \( -name "*.md" -o -name "*.test.*" -o -name "*.spec.*" \) -exec grep -l "supermemoryTools\|containerTags" {} \;

Repository: QueueLab/QCX
Length of output: 38
Fix incorrect supermemoryTools API call — use containerTags instead of projectId.
The configuration parameter is incorrect. According to Supermemory's official AI SDK documentation, the correct pattern is supermemoryTools(apiKey, { containerTags: [userId] }), not projectId. Update line 41-42 to:
```ts
const memoryTools = supermemoryTools(process.env.SUPERMEMORY_API_KEY, {
  containerTags: [userId]
})
```
The containerTags parameter accepts an array and is used to scope/isolate memories by project/user. Using the wrong parameter name will prevent proper user-level memory isolation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/tools/index.tsx` around lines 40 - 46, The supermemoryTools call
is using the wrong config key; replace the { projectId: userId } argument in the
supermemoryTools(...) invocation with { containerTags: [userId] } so the SDK
scopes memories correctly; update the call that creates memoryTools (the
supermemoryTools(...) expression) and ensure tools.searchMemories and
tools.addMemory still get assigned from the returned memoryTools object.
lib/utils/index.ts (outdated diff)
```ts
export async function getModel(requireVision: boolean = false, userId?: string, chatId?: string) {
  const actualUserId = userId || await getCurrentUserIdOnServer();
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check how requireVision was used in the previous version and if any caller depends on it
rg -n 'requireVision' --type=ts
rg -n 'getModel\s*\(' --type=ts -C2

Repository: QueueLab/QCX
Length of output: 3798
🏁 Script executed:
# Read the getModel function implementation
cat -n lib/utils/index.ts | head -60 | tail -40

Repository: QueueLab/QCX
Length of output: 2018
🏁 Script executed:
# Search for getBaseModel function definition
rg -n 'getBaseModel' --type=ts -C5

Repository: QueueLab/QCX
Length of output: 1284
🏁 Script executed:
# Read the full getModel function implementation
cat -n lib/utils/index.ts | sed -n '26,160p'

Repository: QueueLab/QCX
Length of output: 4802
🏁 Script executed:
# Search for vision-related code and model capabilities
rg -n 'vision' --type=ts -C2
rg -n 'image' --type=ts -i | head -40

Repository: QueueLab/QCX
Length of output: 3173
🏁 Script executed:
# Check if there's a separate vision model or any branching logic we might have missed
rg -n 'gpt-4-vision|claude.*vision|gemini.*vision' --type=ts

Repository: QueueLab/QCX
Length of output: 38
🏁 Script executed:
# Check what the models actually are and if they're all vision-capable
rg -n "grok-4|gemini-3|gpt-4o|bedrock|claude" --type=ts -C1Repository: QueueLab/QCX
Length of output: 1879
🏁 Script executed:
# Look for any documentation or comments about model capabilities
rg -n "vision|image.*support|vision.*capable" --type=ts --type=mdRepository: QueueLab/QCX
Length of output: 508
requireVision parameter is accepted but never used in model selection.
The requireVision flag is passed by callers (e.g., resolution-search and researcher pass hasImage) but the function contains no branching logic to act on it. Despite callers intentionally varying this parameter based on whether images are present, it has no effect on model selection or any other logic. This suggests incomplete implementation that could mask issues when vision capabilities are actually needed.
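A sketch of what honoring the flag could look like, using the provider setup already in this file; whether each listed model accepts image input is an assumption to verify:

```ts
// Sketch: route image turns to a vision-capable provider, else fall through.
async function pickModel(requireVision: boolean) {
  if (requireVision) {
    if (openaiApiKey) {
      return createOpenAI({ apiKey: openaiApiKey })('gpt-4o')
    }
    if (gemini3ProApiKey) {
      return createGoogleGenerativeAI({ apiKey: gemini3ProApiKey })('gemini-3-pro-preview')
    }
    throw new Error('requireVision=true but no vision-capable provider is configured')
  }
  return getBaseModel() // existing text-first fallback chain
}
```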
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/index.ts` around lines 26 - 27, The getModel function accepts
requireVision but never uses it; update getModel(requireVision: boolean = false,
userId?: string, chatId?: string) to incorporate requireVision into model
selection (inside getModel), e.g., when requireVision is true prefer/return a
vision-capable model variant or throw a clear error if no vision model is
available; modify the selection logic that currently uses actualUserId (and any
downstream chooser function) to check requireVision first and route to the
appropriate model id or configuration so callers like resolution-search and
researcher receive a vision-capable model when hasImage is true.
lib/utils/index.ts (outdated diff)
```ts
case 'GPT-5.1':
  if (openaiApiKey) {
    const openai = createOpenAI({
      apiKey: openaiApiKey,
    });
    return openai('gpt-4o');
  } else {
    console.error('User selected "GPT-5.1" but OPENAI_API_KEY is not set.');
    throw new Error('Selected model is not configured.');
  }
```
'GPT-5.1' selection returns gpt-4o — model identifier mismatch.
When a user selects "GPT-5.1", the code returns openai('gpt-4o'). This is likely a placeholder or mistake — the user expects GPT-5.1 but gets GPT-4o. Update the model identifier to match the selection, or rename the selection label to accurately reflect the underlying model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/index.ts` around lines 73 - 82, The case for 'GPT-5.1' in the
switch returns openai('gpt-4o') which mismatches the selection; update the model
identifier used in the createOpenAI/openai call to the correct GPT-5.1 model
name (or rename the selection) so the behavior matches the label—specifically
edit the 'GPT-5.1' branch that calls createOpenAI(...) and replace
openai('gpt-4o') with the proper model identifier for GPT-5.1 (or change the
case label if you intend to use gpt-4o).
lib/utils/index.ts (outdated diff)
```ts
// Default behavior: Grok -> Gemini -> Bedrock -> OpenAI
if (xaiApiKey) {
  const xai = createXai({
    apiKey: xaiApiKey,
    baseURL: 'https://api.x.ai/v1',
  });
  try {
    return xai('grok-4-fast-non-reasoning');
  } catch (error) {
    console.warn('xAI API unavailable, falling back to next provider:');
  }
}

if (gemini3ProApiKey) {
  const google = createGoogleGenerativeAI({
    apiKey: gemini3ProApiKey,
  });
  try {
    return google('gemini-3-pro-preview');
  } catch (error) {
    console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error);
  }
}

if (awsAccessKeyId && awsSecretAccessKey) {
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region: awsRegion,
      credentials: {
        accessKeyId: awsAccessKeyId,
        secretAccessKey: awsSecretAccessKey,
      },
    },
  });
  const model = bedrock(bedrockModelId, {
    additionalModelRequestFields: { top_k: 350 },
  });
  return model;
}

const openai = createOpenAI({
  apiKey: openaiApiKey,
});
return openai('gpt-4o');
```
Try-catch fallback chain is ineffective — provider factory calls don't make network requests.
createXai(...)('grok-4-fast-non-reasoning') and createGoogleGenerativeAI(...)('gemini-3-pro-preview') are synchronous factory calls that return model reference objects. They don't perform any network I/O, so they won't throw on API unavailability. The actual failure occurs later when generateObject/streamText/streamObject is called with the model.
This means the fallback chain (Grok → Gemini → Bedrock → OpenAI) will always stop at the first provider whose API key is set, regardless of whether that provider is actually reachable. If Grok's key is set but the service is down, the caller gets a broken model reference with no fallback.
Consider either:
- Removing the misleading try-catch blocks and relying on the API key presence as the sole selection criterion (current effective behavior).
- Implementing actual health checks or wrapping the downstream call with retry/fallback logic at the call site (a sketch of this option appears after the committable suggestion below).
Option 1: Remove misleading try-catch blocks (minimal fix)
```diff
 // Default behavior: Grok -> Gemini -> Bedrock -> OpenAI
 if (xaiApiKey) {
   const xai = createXai({
     apiKey: xaiApiKey,
     baseURL: 'https://api.x.ai/v1',
   });
-  try {
-    return xai('grok-4-fast-non-reasoning');
-  } catch (error) {
-    console.warn('xAI API unavailable, falling back to next provider:');
-  }
+  return xai('grok-4-fast-non-reasoning');
 }

 if (gemini3ProApiKey) {
   const google = createGoogleGenerativeAI({
     apiKey: gemini3ProApiKey,
   });
-  try {
-    return google('gemini-3-pro-preview');
-  } catch (error) {
-    console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error);
-  }
+  return google('gemini-3-pro-preview');
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Default behavior: Grok -> Gemini -> Bedrock -> OpenAI
if (xaiApiKey) {
  const xai = createXai({
    apiKey: xaiApiKey,
    baseURL: 'https://api.x.ai/v1',
  });
  return xai('grok-4-fast-non-reasoning');
}

if (gemini3ProApiKey) {
  const google = createGoogleGenerativeAI({
    apiKey: gemini3ProApiKey,
  });
  return google('gemini-3-pro-preview');
}

if (awsAccessKeyId && awsSecretAccessKey) {
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region: awsRegion,
      credentials: {
        accessKeyId: awsAccessKeyId,
        secretAccessKey: awsSecretAccessKey,
      },
    },
  });
  const model = bedrock(bedrockModelId, {
    additionalModelRequestFields: { top_k: 350 },
  });
  return model;
}

const openai = createOpenAI({
  apiKey: openaiApiKey,
});
return openai('gpt-4o');
```
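For the health-check alternative mentioned above, a rough call-site sketch; `generateText` is the AI SDK entry point, and the candidate list and ordering are illustrative:

```ts
import { generateText, type LanguageModel } from 'ai'

// Move fallback to where network I/O actually happens: try each configured
// provider's model in order and surface the last error if all fail.
async function generateWithFallback(
  candidates: LanguageModel[],
  prompt: string
) {
  let lastError: unknown
  for (const model of candidates) {
    try {
      return await generateText({ model, prompt })
    } catch (error) {
      lastError = error
      console.warn('Provider failed, trying next:', error)
    }
  }
  throw lastError
}
```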
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/index.ts` around lines 86 - 129, The try-catch blocks around the
synchronous factory calls (createXai -> xai('grok-4-fast-non-reasoning') and
createGoogleGenerativeAI -> google('gemini-3-pro-preview')) are ineffective
because those calls only return model references and do not perform network I/O;
remove those misleading try/catch blocks and simply select providers based on
API key presence (keep the existing selection order using createXai,
createGoogleGenerativeAI, createAmazonBedrock, createOpenAI and return the model
reference directly), or alternatively implement real runtime health checks where
the actual calls are made — but for a minimal fix remove the try/catch around
xai(...) and google(...).
lib/utils/index.ts (outdated diff)
```ts
const openai = createOpenAI({
  apiKey: openaiApiKey,
});
return openai('gpt-4o');
```
Missing OPENAI_API_KEY guard on the final fallback.
If no provider API keys are configured (XAI, Gemini, Bedrock), this code creates an OpenAI client with apiKey: undefined. This will fail at call time with an unhelpful error. Add a guard and throw a descriptive error if no provider is available.
Proposed fix
```diff
+if (!openaiApiKey) {
+  throw new Error('No AI provider is configured. Please set at least one API key (XAI_API_KEY, GEMINI_3_PRO_API_KEY, AWS credentials, or OPENAI_API_KEY).');
+}
 const openai = createOpenAI({
   apiKey: openaiApiKey,
 });
 return openai('gpt-4o');
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!openaiApiKey) {
  throw new Error('No AI provider is configured. Please set at least one API key (XAI_API_KEY, GEMINI_3_PRO_API_KEY, AWS credentials, or OPENAI_API_KEY).');
}
const openai = createOpenAI({
  apiKey: openaiApiKey,
});
return openai('gpt-4o');
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/index.ts` around lines 126 - 129, The code creates an OpenAI client
with apiKey potentially undefined (createOpenAI called with openaiApiKey) and
then returns openai('gpt-4o') without verifying credentials; add a guard that
checks openaiApiKey (or overall provider availability) before
instantiating/returning the OpenAI model and, if missing, throw a descriptive
error like "No AI provider configured: set OPENAI_API_KEY or configure another
provider" so callers get a clear failure instead of an undefined-key runtime
error; update the logic around createOpenAI, the openai variable, and the return
openai('gpt-4o') to perform this check.
- Integrated Supermemory AI SDK for persistent agent memory.
- Refactored `lib/utils/index.ts` to separate server-only AI logic into `lib/utils/ai.ts`. This fixes a build error where client components (like `chat.tsx`) were indirectly importing `next/headers`.
- Updated all agents and actions to use the new server-only utility for model retrieval.
- Propagated `userId` and `chatId` to all agents for user-scoped memory.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
lib/agents/writer.tsx (1)
30-30: ⚠️ Potential issue | 🟡 Minor
Typo: "proconfigured" → "preconfigured"

```diff
- There are also some proconfigured example queires.
+ There are also some preconfigured example queries.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/writer.tsx` at line 30, Fix the typos in the displayed string "There are also some proconfigured example queires." by changing "proconfigured" to "preconfigured" and "queires" to "queries" wherever that literal appears (e.g., in the JSX/constant inside lib/agents/writer.tsx that renders that sentence).

lib/actions/suggest.ts (1)
29-35: ⚠️ Potential issue | 🟠 Major
`streamObject` in `getSuggestions` will fail at runtime when the default model is Grok

`getModel()` is called with no arguments, so the default fallback chain applies: if `XAI_API_KEY` is set, it returns `grok-4-fast-non-reasoning`. However, `streamObject` with a Zod schema requires JSON Schema structured-output support, which xAI's API rejects for this model. This causes a runtime failure whenever XAI is the configured default provider.

Additionally, unlike other `streamObject` callers (query-suggestor, inquire, resolution-search), `getSuggestions` has no `userId`/`chatId` parameters, creating an inconsistency in how user context is passed to `getModel()`. While Supermemory wrapping still occurs via `getCurrentUserIdOnServer()` internally, confirm whether suggestion calls should maintain explicit user-context scoping for consistency.

Prefer `gpt-4o` for `streamObject` tasks with structured outputs.
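A minimal sketch of the preferred shape, assuming a simplified `relatedSchema` and an inline system prompt (the real schema and prompt live in the repo):

```ts
import { createOpenAI } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

// Assumed schema shape, for illustration only.
const relatedSchema = z.object({ related: z.array(z.string()) });

export async function getSuggestions(query: string) {
  const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
  return streamObject({
    // gpt-4o supports JSON Schema structured output, unlike grok-4-fast-non-reasoning.
    model: openai('gpt-4o'),
    system: 'Suggest related queries.', // placeholder for the repo systemPrompt
    schema: relatedSchema,
    prompt: query,
  });
}
```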
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/actions/suggest.ts` around lines 29 - 35, getSuggestions currently calls streamObject with (await getModel()) which can return the Grok model and fail for Zod/schema-based structured output; change getSuggestions to request a model that supports JSON Schema (prefer gpt-4o) by calling getModel('gpt-4o') or otherwise forcing a non-xAI model when invoking streamObject, and ensure you pass explicit user context by adding optional userId/chatId parameters to getSuggestions and forwarding them into getModel (or call getCurrentUserIdOnServer() and pass that id into getModel) so that Supermemory scoping matches other callers; update the call-site symbols streamObject, getModel, getSuggestions, relatedSchema, systemPrompt, query accordingly.

lib/agents/resolution-search.tsx (1)
92-97: 🧹 Nitpick | 🔵 Trivial
Consider skipping memory wrapping for schema-constrained image analysis agent.

While `withSupermemory` middleware is compatible with `streamObject` at the AI SDK level (it applies to the model, not the function), the `@supermemory/tools` documentation doesn't demonstrate this pattern—it focuses on `generateText`/`streamText` examples. More importantly, memory injection adds a fetch API call on every invocation, which adds latency without benefit: since `streamObject` carries no `tools` key here, there's no write path for `addMemory`, and user memories are unlikely to be relevant for a tightly schema-constrained geospatial image-analysis response.

Consider guarding the `getModel` call to skip memory wrapping for schema-constrained agents:

💡 Suggested guard

```diff
- model: await getModel(hasImage, userId, chatId),
+ // Resolution search is schema-constrained; memory context is irrelevant here.
+ model: await getModel(hasImage),
```

Or expose a `withMemory` flag in `getModel`'s signature so callers opt in explicitly.
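One possible shape for that opt-in flag, sketched under the assumption that provider selection is factored into a helper (`pickProvider` is a placeholder, not repo code):

```ts
import { withSupermemory } from '@supermemory/tools/ai-sdk';
import type { LanguageModel } from 'ai';

// Placeholder for the provider-selection logic reviewed above.
declare function pickProvider(requireVision: boolean): LanguageModel;

export async function getModel(
  requireVision = false,
  userId?: string,
  chatId?: string,
  opts: { withMemory?: boolean } = {}
): Promise<LanguageModel> {
  const model = pickProvider(requireVision);
  const { withMemory = true } = opts;
  if (withMemory && process.env.SUPERMEMORY_API_KEY && userId) {
    // Schema-constrained callers pass { withMemory: false } to skip this.
    return withSupermemory(model, userId, { conversationId: chatId });
  }
  return model;
}
```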
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/resolution-search.tsx` around lines 92 - 97, The current call to streamObject uses getModel(...) which may wrap the model with withSupermemory and trigger unnecessary fetches; change getModel so callers can opt out of memory wrapping (e.g., add a withMemory boolean flag to getModel or a guard when calling getModel here) and call getModel(hasImage, userId, chatId, { withMemory: false }) or similar from this resolution-search code path so that streamObject with resolutionSearchSchema isn’t wrapped by withSupermemory; update getModel signature and internal logic that applies withSupermemory accordingly and keep streamObject(system: systemPrompt, messages: filteredMessages, schema: resolutionSearchSchema) behavior unchanged.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (10)
lib/actions/suggest.ts
lib/agents/inquire.tsx
lib/agents/query-suggestor.tsx
lib/agents/researcher.tsx
lib/agents/resolution-search.tsx
lib/agents/task-manager.tsx
lib/agents/writer.tsx
lib/utils/ai.ts
lib/utils/index.ts
mapbox_mcp/hooks.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Applied to files:
lib/utils/index.ts
lib/agents/task-manager.tsx
lib/agents/researcher.tsx
lib/agents/writer.tsx
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/utils/index.ts
lib/actions/suggest.ts
lib/agents/resolution-search.tsx
lib/agents/task-manager.tsx
lib/utils/ai.ts
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
lib/utils/index.ts
lib/agents/task-manager.tsx
lib/utils/ai.ts
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
Applied to files:
lib/agents/resolution-search.tsx
🧬 Code graph analysis (6)
lib/agents/task-manager.tsx (1)
lib/utils/ai.ts (1)
getModel(10-127)
lib/agents/researcher.tsx (1)
lib/utils/ai.ts (1)
getModel(10-127)
lib/utils/ai.ts (2)
lib/auth/get-current-user.ts (1)
getCurrentUserIdOnServer(112-127)
lib/actions/users.ts (2)
getSelectedModel(146-162)
saveSelectedModel(167-182)
lib/agents/inquire.tsx (1)
lib/utils/ai.ts (1)
getModel(10-127)
lib/agents/writer.tsx (1)
lib/utils/ai.ts (1)
getModel(10-127)
lib/agents/query-suggestor.tsx (1)
lib/utils/ai.ts (1)
getModel(10-127)
🔇 Additional comments (6)
lib/utils/index.ts (1)

1-17: LGTM — clean separation of client-safe utilities
Removing AI provider logic from `lib/utils/index.ts` correctly isolates server-only code and allows pure client components to import `cn`/`generateUUID` without triggering `next/headers` errors.

lib/agents/resolution-search.tsx (2)

2-2: LGTM — import path updated to the new server-only AI utilities module.

41-48: LGTM — optional `userId`/`chatId` parameters correctly appended.

lib/agents/researcher.tsx (3)
13-13: LGTM — import updated to the refactored server-only AI utilities module.
90-92: LGTM — optional context parameters correctly appended to the signature.
116-122: LGTM — model and tools calls correctly propagate user context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/agents/writer.tsx`:
- Around line 7-14: Update the default system prompt string used by the writer
function to correct two typos: change "proconfigured" to "preconfigured" and
"queires" to "queries"; locate the prompt text referenced or assigned inside the
writer function (parameter dynamicSystemPrompt / any local defaultPrompt
variable) and replace the misspelled words so that the prompt reads with the
corrected spellings while keeping the rest of the wording unchanged.
In `@lib/utils/ai.ts`:
- Around line 94-108: The guard before creating the Bedrock client only checks
awsAccessKeyId and awsSecretAccessKey but not awsRegion, so add a check to
ensure awsRegion is truthy (e.g., change the if to if (awsAccessKeyId &&
awsSecretAccessKey && awsRegion)) before calling createAmazonBedrock; this
ensures region passed into createAmazonBedrock (and the subsequent model created
via bedrock(bedrockModelId,...)) is defined and avoids passing undefined to
createAmazonBedrock.
- Around line 1-8: Add the server-only guard by importing 'server-only' at the
top of lib/utils/ai.ts so this module cannot be accidentally bundled into client
code; update the file that currently imports getSelectedModel,
getCurrentUserIdOnServer, createOpenAI, createGoogleGenerativeAI,
createAmazonBedrock, createXai and withSupermemory to include the side-effect
import "server-only" as the very first import statement to ensure any use of
getCurrentUserIdOnServer or database-calling functions is restricted to server
runtime.
In `@mapbox_mcp/hooks.ts`:
- Line 3: The hook useMCPMapClient imports getModel (from '@/lib/utils/ai')
which transitively pulls server-only code; remove or refactor that import to
avoid mixing client and server code in this client hook: either (a) remove
getModel from this file and fetch model data via a server-side API route or pass
model data into useMCPMapClient as a parameter, or (b) if the hook is intended
only for server-side use, mark the file as server-only (e.g., add a server
directive) and ensure no client React primitives
(useState/useEffect/useRef/useCallback) remain; update references to getModel
and the hook accordingly.
---
Outside diff comments:
In `@lib/actions/suggest.ts`:
- Around line 29-35: getSuggestions currently calls streamObject with (await
getModel()) which can return the Grok model and fail for Zod/schema-based
structured output; change getSuggestions to request a model that supports JSON
Schema (prefer gpt-4o) by calling getModel('gpt-4o') or otherwise forcing a
non-xAI model when invoking streamObject, and ensure you pass explicit user
context by adding optional userId/chatId parameters to getSuggestions and
forwarding them into getModel (or call getCurrentUserIdOnServer() and pass that
id into getModel) so that Supermemory scoping matches other callers; update the
call-site symbols streamObject, getModel, getSuggestions, relatedSchema,
systemPrompt, query accordingly.
In `@lib/agents/resolution-search.tsx`:
- Around line 92-97: The current call to streamObject uses getModel(...) which
may wrap the model with withSupermemory and trigger unnecessary fetches; change
getModel so callers can opt out of memory wrapping (e.g., add a withMemory
boolean flag to getModel or a guard when calling getModel here) and call
getModel(hasImage, userId, chatId, { withMemory: false }) or similar from this
resolution-search code path so that streamObject with resolutionSearchSchema
isn’t wrapped by withSupermemory; update getModel signature and internal logic
that applies withSupermemory accordingly and keep streamObject(system:
systemPrompt, messages: filteredMessages, schema: resolutionSearchSchema)
behavior unchanged.
In `@lib/agents/writer.tsx`:
- Line 30: Fix the typos in the displayed string "There are also some
proconfigured example queires." by changing "proconfigured" to "preconfigured"
and "queires" to "queries" wherever that literal appears (e.g., in the
JSX/constant inside lib/agents/writer.tsx that renders that sentence).
---
Duplicate comments:
In `@lib/agents/inquire.tsx`:
- Around line 27-28: The call to streamObject in inquire uses a Zod schema but
getModel(false, userId, chatId) can return the X.AI grok model which is
incompatible with ai.generateObject/streamObject; update inquire to detect the
returned model (e.g. check model.id or model.name from getModel) and if it
equals or contains "grok-4-fast-non-reasoning" (or any grok variant) swap to a
supported LLM (fallback model id used elsewhere in the codebase) or call
getModel with a flag/param that forces a compatible model; ensure this guard is
applied before calling streamObject so structured output callers never pass a
grok model to streamObject.
In `@lib/agents/query-suggestor.tsx`:
- Around line 22-24: querySuggestor calls streamObject with a Zod schema but may
receive X.AI's grok-4-fast-non-reasoning from getModel(false, userId, chatId),
which rejects structured-output args and causes runtime failures; modify the
logic around streamObject/getModel so the model used supports structured output
(e.g., prefer OpenAI gpt-4o) — either call getModel in a mode that guarantees a
structured-output-capable model (use whatever flag/variant exists instead of
getModel(false, ...)), or detect the returned model id (e.g.,
"grok-4-fast-non-reasoning" or XAI provider) and fall back to an OpenAI model
before calling streamObject; update the code paths around streamObject,
getModel, and querySuggestor so streamObject is never invoked with an X.AI grok
model.
In `@lib/agents/researcher.tsx`:
- Line 35: The code unconditionally instructs the model to use
searchMemories/addMemory and also omits memory instructions when
dynamicSystemPrompt is provided; fix this by making memory-tool use conditional:
in researcher.tsx only add the memory-related system prompt/instructions or tool
integrations (references to searchMemories and addMemory) when
SUPERMEMORY_API_KEY is set and a valid userId is available, and ensure that when
dynamicSystemPrompt is supplied you still include the memory instructions if the
env key and userId exist (i.e., merge, not replace, dynamicSystemPrompt with the
memory directives); update the logic that builds the system prompt so it checks
process.env.SUPERMEMORY_API_KEY and userId before referencing
searchMemories/addMemory and omits those references entirely otherwise.
In `@lib/agents/task-manager.tsx`:
- Around line 18-19: taskManager calls generateObject(...) with a Zod schema but
getModel(false, userId, chatId) can return the xAI model
"grok-4-fast-non-reasoning", which rejects JSON Schema structured-output and
causes runtime failures; change the call-site so that before passing the model
into generateObject you detect if the chosen model is
"grok-4-fast-non-reasoning" (or any xAI model when XAI_API_KEY is set) and
instead use an OpenAI model that supports structured output (e.g., "gpt-4o") —
alternatively, update getModel or add a helper that accepts a flag like
ensureStructuredOutputModel(userId, chatId) and returns a compatible
LanguageModel for use in generateObject to guarantee schema compatibility.
In `@lib/utils/ai.ts`:
- Around line 118-124: Change the hardcoded Supermemory persistence from
addMemory: 'always' to a safer default and make it configurable; locate the call
to withSupermemory(model, actualUserId, {...}) in lib/utils/ai.ts (the block
that passes conversationId: chatId, mode: 'full', addMemory: 'always') and
replace the literal with either 'auto' or a value derived from a new
config/feature-flag/environment variable (e.g., SUPERMEMORY_ADD_MEMORY or a
user-consent check) so memory saving is opt-in and auditable; also add a short
comment noting that withSupermemory is Beta so callers should handle potential
instability.
- Around line 110-113: The fallback branch currently calls createOpenAI({
apiKey: openaiApiKey }) and returns openai('gpt-4o') even when openaiApiKey is
undefined; update the logic in the fallback to check openaiApiKey before
creating/using the OpenAI client (identify the symbols createOpenAI,
openaiApiKey, and the call openai('gpt-4o')), and if openaiApiKey is missing
throw or return a clear error/message like "no provider configured: missing
OpenAI API key" instead of attempting to call the client.
- Around line 25-67: The switch over selectedModel has three problems: the
'GPT-5.1' case returns 'gpt-4o' (mismatch), there's no default branch to
log/unexpected selections, and the try-catch blocks around
createXai(...)('grok-4-fast-non-reasoning') and
createGoogleGenerativeAI(...)('gemini-3-pro-preview') are ineffective because
those factory calls are synchronous and won't throw here. Fix by updating the
model identifier in the 'GPT-5.1' case to the correct GPT-5.1 model name (or
rename the UI label to match 'gpt-4o') in the createOpenAI(...) return, add a
default case that logs the unrecognized selectedModel and throws, and remove the
dead try-catch blocks around the synchronous factory calls (or move error
handling to the real inference call sites); locate these changes in the switch
handling selectedModel and the functions createXai, createGoogleGenerativeAI,
and createOpenAI usages.
- Around line 10-11: getModel currently ignores the requireVision flag and
always calls getCurrentUserIdOnServer(), so update getModel to (1) consult the
requireVision boolean when selecting the model (use vision-capable models when
requireVision is true, to honor callers like resolutionSearch and researcher)
and (2) lazily resolve actualUserId by only calling await
getCurrentUserIdOnServer() when SUPERMEMORY_API_KEY is present and the userId is
actually required for the Supermemory flow; reference the getModel function, the
requireVision parameter, getCurrentUserIdOnServer(), and the SUPERMEMORY_API_KEY
check to locate where to add the model-branching logic and the deferred userId
lookup.
- Around line 71-92: The current fallback is ineffective because the synchronous
factory calls (createXai/createGoogleGenerativeAI) — and their immediate
invocations xai('grok-4-fast-non-reasoning') and google('gemini-3-pro-preview')
— cannot fail for network errors, so the chain stops at the first configured
key; change the logic to return an async wrapper (instead of returning the model
client directly) that defers calling the provider until inference time and
performs the actual network call inside a try/catch (e.g., call the model's
request/generate method inside the wrapper), catching runtime/network errors and
falling through to the next provider (refer to xai, createXai,
xai('grok-4-fast-non-reasoning'), google, createGoogleGenerativeAI,
google('gemini-3-pro-preview') to locate the spots to replace).
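For reference, a rough sketch of the deferred-fallback wrapper that last prompt asks for; the candidate list and error handling are assumptions, not the repo's implementation:

```ts
import { generateText, LanguageModel } from 'ai';

// Try each configured provider at inference time. Network failures surface
// here, inside the actual request, so they are catchable, unlike at the
// synchronous factory calls flagged above.
export async function generateWithFallback(
  candidates: LanguageModel[],
  prompt: string
) {
  let lastError: unknown;
  for (const model of candidates) {
    try {
      return await generateText({ model, prompt });
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError ?? new Error('No AI provider available.');
}
```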
```diff
 export async function writer(
-  dynamicSystemPrompt: string, // New parameter
+  dynamicSystemPrompt: string,
   uiStream: ReturnType<typeof createStreamableUI>,
   streamText: ReturnType<typeof createStreamableValue<string>>,
-  messages: CoreMessage[]
+  messages: CoreMessage[],
+  userId?: string,
+  chatId?: string
 ) {
```
🧩 Analysis chain
🏁 Script executed:
# Find all call sites of writer function
rg -n --type=ts --type=tsx -B1 -A5 '\bwriter\s*\(' --glob "*.{ts,tsx}"
Repository: QueueLab/QCX
Length of output: 83
🏁 Script executed:
# Check line 30 of writer.tsx for the typo
sed -n '25,35p' lib/agents/writer.tsx
Repository: QueueLab/QCX
Length of output: 865
🏁 Script executed:
# Get the full context of writer.tsx to understand the changes
wc -l lib/agents/writer.tsx
Repository: QueueLab/QCX
Length of output: 80
🏁 Script executed:
# Find all call sites of writer function without using -t tsx
rg -n -B1 -A5 'writer\s*\(' --include='*.ts' --include='*.tsx'
Repository: QueueLab/QCX
Length of output: 497
🏁 Script executed:
# Also search for imports of writer
rg -n 'import.*writer' --include='*.ts' --include='*.tsx'
Repository: QueueLab/QCX
Length of output: 497
🏁 Script executed:
# Get more context around the typo and the full writer function signature
head -n 53 lib/agents/writer.tsx
Repository: QueueLab/QCX
Length of output: 2650
🏁 Script executed:
# Search for writer function calls in all ts/tsx files
rg -n 'writer\s*\(' --glob='*.ts' --glob='*.tsx'
Repository: QueueLab/QCX
Length of output: 246
🏁 Script executed:
# Also search for imports of writer to find where it's used
rg -n 'from.*writer|import.*writer' --glob='*.ts' --glob='*.tsx'
Repository: QueueLab/QCX
Length of output: 119
🏁 Script executed:
# Get a list of all ts/tsx files to understand the codebase structure
find . -type f \( -name '*.ts' -o -name '*.tsx' \) | head -20
Repository: QueueLab/QCX
Length of output: 619
🏁 Script executed:
# Get context around the writer call at line 485
sed -n '480,490p' app/actions.tsx
Repository: QueueLab/QCX
Length of output: 402
🏁 Script executed:
# Check for any other potential writer calls that might have been missed
rg -n 'writer' --glob='*.ts' --glob='*.tsx' -B2 -A2
Repository: QueueLab/QCX
Length of output: 2194
Fix typos in the default system prompt
Line 27 contains two typos:
- "proconfigured" should be "preconfigured"
- "queires" should be "queries"
The call site at app/actions.tsx:485 correctly passes currentSystemPrompt as the first argument, so the positional parameter change has been properly implemented.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/writer.tsx` around lines 7 - 14, Update the default system prompt
string used by the writer function to correct two typos: change "proconfigured"
to "preconfigured" and "queires" to "queries"; locate the prompt text referenced
or assigned inside the writer function (parameter dynamicSystemPrompt / any
local defaultPrompt variable) and replace the misspelled words so that the
prompt reads with the corrected spellings while keeping the rest of the wording
unchanged.
```ts
import { getSelectedModel } from '@/lib/actions/users'
import { createOpenAI } from '@ai-sdk/openai'
import { createGoogleGenerativeAI } from '@ai-sdk/google'
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock'
import { createXai } from '@ai-sdk/xai'
import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user'
import { withSupermemory } from '@supermemory/tools/ai-sdk'
import { LanguageModel } from 'ai'
```
Add import 'server-only' to prevent accidental client bundling
lib/utils/ai.ts transitively imports @/lib/auth/get-current-user (which reads next/headers) and @/lib/actions/users (which calls db). Without an explicit server-only guard, any module that accidentally imports this file into a client bundle will cause a build error. The PR's stated goal of separating server-only AI code won't be enforced at the module level.
🛡️ Proposed fix

```diff
+import 'server-only'
+
 import { getSelectedModel } from '@/lib/actions/users'
 import { createOpenAI } from '@ai-sdk/openai'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
import 'server-only'

import { getSelectedModel } from '@/lib/actions/users'
import { createOpenAI } from '@ai-sdk/openai'
import { createGoogleGenerativeAI } from '@ai-sdk/google'
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock'
import { createXai } from '@ai-sdk/xai'
import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user'
import { withSupermemory } from '@supermemory/tools/ai-sdk'
import { LanguageModel } from 'ai'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/ai.ts` around lines 1 - 8, Add the server-only guard by importing
'server-only' at the top of lib/utils/ai.ts so this module cannot be
accidentally bundled into client code; update the file that currently imports
getSelectedModel, getCurrentUserIdOnServer, createOpenAI,
createGoogleGenerativeAI, createAmazonBedrock, createXai and withSupermemory to
include the side-effect import "server-only" as the very first import statement
to ensure any use of getCurrentUserIdOnServer or database-calling functions is
restricted to server runtime.
```ts
if (awsAccessKeyId && awsSecretAccessKey) {
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region: awsRegion,
      credentials: {
        accessKeyId: awsAccessKeyId,
        secretAccessKey: awsSecretAccessKey,
      },
    },
  });
  const model = bedrock(bedrockModelId, {
    additionalModelRequestFields: { top_k: 350 },
  });
  return model;
}
```
AWS_REGION is not validated before being passed to Bedrock
awsRegion can be undefined (Line 97). The guard on Line 94 checks only for credentials, not region. Passing region: undefined to createAmazonBedrock may produce an unhelpful runtime error. Add && awsRegion to the guard condition.
🐛 Proposed fix

```diff
-  if (awsAccessKeyId && awsSecretAccessKey) {
+  if (awsAccessKeyId && awsSecretAccessKey && awsRegion) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (awsAccessKeyId && awsSecretAccessKey && awsRegion) {
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region: awsRegion,
      credentials: {
        accessKeyId: awsAccessKeyId,
        secretAccessKey: awsSecretAccessKey,
      },
    },
  });
  const model = bedrock(bedrockModelId, {
    additionalModelRequestFields: { top_k: 350 },
  });
  return model;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/utils/ai.ts` around lines 94 - 108, The guard before creating the Bedrock
client only checks awsAccessKeyId and awsSecretAccessKey but not awsRegion, so
add a check to ensure awsRegion is truthy (e.g., change the if to if
(awsAccessKeyId && awsSecretAccessKey && awsRegion)) before calling
createAmazonBedrock; this ensures region passed into createAmazonBedrock (and
the subsequent model created via bedrock(bedrockModelId,...)) is defined and
avoids passing undefined to createAmazonBedrock.
```diff
 import { useState, useCallback, useRef, useEffect } from 'react';
 import { generateText } from 'ai';
-import { getModel } from '@/lib/utils';
+import { getModel } from '@/lib/utils/ai';
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all imports of useMCPMapClient to determine if it's used in client components
echo "=== Searching for useMCPMapClient imports ==="
rg -n "useMCPMapClient|mapbox_mcp/hooks" --type=ts --type=tsx -B2
echo ""
echo "=== Checking for 'use client' in files that import useMCPMapClient ==="
# Find files that import useMCPMapClient, then check if they have 'use client'
rg -l "useMCPMapClient" --type=ts --type=tsx | while read file; do
if grep -q "use client" "$file"; then
echo "$file has 'use client' directive"
fi
done
echo ""
echo "=== Files importing useMCPMapClient ==="
rg -l "useMCPMapClient" --type=ts --type=tsxRepository: QueueLab/QCX
Length of output: 315
🏁 Script executed:
#!/bin/bash
# Examine `@/lib/utils/ai` to see what it imports and exports
echo "=== Checking `@/lib/utils/ai.ts` ==="
if [ -f "src/lib/utils/ai.ts" ]; then
cat -n "src/lib/utils/ai.ts"
elif [ -f "lib/utils/ai.ts" ]; then
cat -n "lib/utils/ai.ts"
else
find . -name "ai.ts" -path "*/lib/utils/*" -type f
fi
Repository: QueueLab/QCX
Length of output: 5327
🏁 Script executed:
#!/bin/bash
# Check mapbox_mcp/hooks.ts to verify the import statement
echo "=== Checking mapbox_mcp/hooks.ts imports and hook definition ==="
if [ -f "src/mapbox_mcp/hooks.ts" ]; then
head -30 "src/mapbox_mcp/hooks.ts"
elif [ -f "mapbox_mcp/hooks.ts" ]; then
head -30 "mapbox_mcp/hooks.ts"
else
find . -name "hooks.ts" -path "*/mapbox_mcp/*" -type f
fi
Repository: QueueLab/QCX
Length of output: 830
🏁 Script executed:
#!/bin/bash
# Check if mapbox_mcp/hooks.ts has 'use client' directive
echo "=== Checking for 'use client' directive in mapbox_mcp/hooks.ts ==="
grep -n "use client\|use server" mapbox_mcp/hooks.ts || echo "No 'use client'/'use server' found"
echo ""
echo "=== Full mapbox_mcp/hooks.ts content ==="
cat mapbox_mcp/hooks.ts
Repository: QueueLab/QCX
Length of output: 9833
🏁 Script executed:
#!/bin/bash
# Check for any references to mapbox_mcp/hooks in comments or config
echo "=== Searching for references to useMCPMapClient in comments/TODOs ==="
rg -i "useMCPMapClient|mapbox.*hook" --type=ts --type=tsx -B2 -A2
echo ""
echo "=== Check git history for this file ==="
git log --oneline -20 -- mapbox_mcp/hooks.ts || echo "Git history unavailable"
Repository: QueueLab/QCX
Length of output: 279
🏁 Script executed:
#!/bin/bash
# Check if mapbox_mcp directory exists and what's in it
echo "=== Contents of mapbox_mcp directory ==="
find . -type d -name "mapbox_mcp" 2>/dev/null
echo ""
echo "=== All files in mapbox_mcp ==="
find . -path "*/mapbox_mcp/*" -type f 2>/dev/null
echo ""
echo "=== Check the commit that modified this file ==="
git show b358433 --stat || echo "Commit details unavailable"
echo ""
echo "=== Check what changed in that commit for hooks.ts ==="
git show b358433 -- "*hooks.ts" 2>/dev/null | head -100 || echo "Diff unavailable"Repository: QueueLab/QCX
Length of output: 18377
🏁 Script executed:
#!/bin/bash
# Look for any TODO or FIXME comments related to this file or mapbox
echo "=== Searching for TODO/FIXME related to mapbox or this hook ==="
rg -i "TODO|FIXME|WIP" mapbox_mcp/ 2>/dev/null || echo "No TODOs found"
echo ""
echo "=== Check if composio-mapbox exists and what it exports ==="
find . -name "composio-mapbox*" -type f
Repository: QueueLab/QCX
Length of output: 227
Remove or refactor getModel import to avoid mixing client and server code in a client hook
useMCPMapClient uses useState, useCallback, useRef, and useEffect — client-only React primitives — yet Line 3 imports getModel from @/lib/utils/ai, which transitively imports server-only functions (getCurrentUserIdOnServer from next/headers and getSelectedModel from the database). While this hook is currently unused and doesn't cause a build error, importing server-only code in a client hook is an architectural issue that will break if any component imports this hook.
The existing WARNING comment already notes this should not be used directly in client components. Either call getModel via a server-side API route instead, or mark this file as server-only (e.g., `import 'server-only'`) if it's intended only for server-side execution.
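A hypothetical route-handler version of option (a), keeping `getModel` on the server; the path and payload shape are invented for illustration:

```ts
// app/api/mcp-map/route.ts (hypothetical path)
import { NextResponse } from 'next/server';
import { generateText } from 'ai';
import { getModel } from '@/lib/utils/ai';

// The client hook would POST here instead of importing server-only code.
export async function POST(req: Request) {
  const { prompt } = await req.json();
  const model = await getModel(); // resolved server-side only
  const { text } = await generateText({ model, prompt });
  return NextResponse.json({ text });
}
```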
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mapbox_mcp/hooks.ts` at line 3, The hook useMCPMapClient imports getModel
(from '@/lib/utils/ai') which transitively pulls server-only code; remove or
refactor that import to avoid mixing client and server code in this client hook:
either (a) remove getModel from this file and fetch model data via a server-side
API route or pass model data into useMCPMapClient as a parameter, or (b) if the
hook is intended only for server-side use, mark the file as server-only (e.g.,
add a server directive) and ensure no client React primitives
(useState/useEffect/useRef/useCallback) remain; update references to getModel
and the hook accordingly.
- Hoisted `getCurrentUserIdOnServer` import in `app/actions.tsx`.
- Changed `userId` fallback from 'anonymous' to `undefined` for better memory isolation.
- Refactored `getModel` to support `requireVision` and `requireStructuredOutput` flags.
- Updated agents to request structured-output compatible models (avoiding Grok when necessary).
- Fixed Supermemory tool configuration to use `containerTags: [userId]`.
- Made Supermemory `addMemory` mode configurable via environment variables.
- Made memory instructions in `Researcher` agent conditional on Supermemory availability.
- Fixed typos in `Writer` agent system prompt.
- Ensured all server-only AI utilities are in `lib/utils/ai.ts` to prevent client-side build errors.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
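A sketch of the env-driven `addMemory` mode this commit describes, assuming the `SUPERMEMORY_ADD_MEMORY` variable name suggested earlier in this review (the actual variable and option values may differ):

```ts
import { withSupermemory } from '@supermemory/tools/ai-sdk';
import type { LanguageModel } from 'ai';

export function wrapWithMemory(
  model: LanguageModel,
  userId: string,
  chatId?: string
) {
  // Default to the safer 'auto' mode; persist everything only when the
  // operator explicitly opts in via the environment.
  const addMemory =
    process.env.SUPERMEMORY_ADD_MEMORY === 'always' ? 'always' : 'auto';
  return withSupermemory(model, userId, {
    conversationId: chatId,
    mode: 'full',
    addMemory,
  });
}
```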
Integrated Supermemory AI SDK provider into the entire agent system.
Key changes:
PR created automatically by Jules for task 2963383581908392471 started by @ngoiyaeric
Summary by CodeRabbit