feat: add AI companion chat panel for protocol Q&A (#584) #587

Open
rickstaa wants to merge 1 commit into main from claude/issue-584-implementation-5wk8z
Conversation

@rickstaa (Member) commented:

Implement a floating chat panel that lets users ask natural language
questions about Livepeer protocol data, backed by real-time data from
The Graph subgraph and existing Explorer API routes.

Architecture:

  • Gemini 2.5 Flash via Vercel AI SDK with streaming responses
  • 9 predefined read-only tools (orchestrators, delegators, protocol
    stats, current round, performance, AI usage, events, treasury)
  • Semantic caching via Upstash Vector (5-min TTL, cosine similarity)
  • Rate limiting via Upstash Redis (20 req/min per IP)
  • All tool parameters Zod-validated, no raw user queries
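
For illustration, the 20 req/min policy corresponds to a sliding-window check like the in-memory sketch below. The PR itself uses @upstash/ratelimit backed by Redis, so this is only a model of the behavior, not the actual implementation; `allowRequest` and its parameters are hypothetical names.

```typescript
// Hypothetical in-memory sliding-window limiter modeling "20 req/min per IP".
// Timestamps inside the window are kept per IP; a request is allowed only if
// fewer than `limit` requests fall within the trailing `windowMs`.
type Window = number[]; // request timestamps in ms

const windows = new Map<string, Window>();

function allowRequest(
  ip: string,
  limit = 20,
  windowMs = 60_000,
  now = Date.now()
): boolean {
  // Drop timestamps that have aged out of the sliding window.
  const hits = (windows.get(ip) ?? []).filter((t) => now - t < windowMs);
  if (hits.length >= limit) {
    windows.set(ip, hits);
    return false; // over the per-IP budget for this window
  }
  hits.push(now);
  windows.set(ip, hits);
  return true;
}
```

The Redis-backed version distributes the same bookkeeping across instances, which an in-process Map cannot do.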

New files:

  • pages/api/ai/chat.ts - streaming chat API route
  • lib/ai/{config,ratelimit,cache}.ts - infrastructure
  • lib/ai/tools/*.ts - 9 data-fetching tools
  • components/AiChat/* - FAB, chat panel, message thread, renderers

Modified files:

  • hooks/useExplorerStore.tsx - added aiChatOpen state
  • layouts/main.tsx - mounted AiChat component
  • .env.example - added AI/caching env vars
  • package.json - added ai, @ai-sdk/google, @ai-sdk/react, upstash deps

https://claude.ai/code/session_01XFuaKpyHwtpin16qR7dt9P

Copilot AI review requested due to automatic review settings March 10, 2026 00:27
@rickstaa rickstaa requested a review from ECWireless as a code owner March 10, 2026 00:27

vercel bot commented Mar 10, 2026

The latest updates on your projects.

Project: explorer-arbitrum-one
Deployment: Ready (Preview) — Updated Mar 10, 2026 0:27am (UTC)



Copilot AI left a comment


Pull request overview

Adds an in-app AI companion chat experience to the Livepeer Explorer, backed by a new streaming API route and a small set of read-only, Zod-validated “tools” that fetch protocol data from the existing subgraph/API infrastructure.

Changes:

  • Introduces /api/ai/chat streaming endpoint with rate limiting and semantic caching hooks.
  • Adds 9 AI tools for protocol Q&A (orchestrators, delegators, protocol stats, round info, performance, AI usage, events, treasury).
  • Mounts a floating AI chat UI (FAB + panel + message renderers) into the main layout and adds store state for panel open/close.

Reviewed changes

Copilot reviewed 27 out of 28 changed files in this pull request and generated 8 comments.

Summary per file:

• pnpm-lock.yaml: Locks new AI SDK + Upstash dependencies and transitive changes.
• package.json: Adds ai, @ai-sdk/*, and Upstash packages required for chat, caching, and rate limiting.
• .env.example: Documents new optional env vars for Gemini + Upstash Redis/Vector.
• pages/api/ai/chat.ts: New streaming chat API route with rate limiting and semantic cache lookup/write.
• lib/ai/config.ts: Defines Gemini model selection and system prompt.
• lib/ai/ratelimit.ts: Upstash Redis sliding-window rate limiter wrapper.
• lib/ai/cache.ts: Upstash Vector semantic cache wrapper with TTL logic.
• lib/ai/tools/index.ts: Barrel exports for the AI tools.
• lib/ai/tools/get-orchestrators.ts: Tool to fetch and render a ranked orchestrator table.
• lib/ai/tools/get-orchestrator.ts: Tool to fetch and render single-orchestrator stats.
• lib/ai/tools/get-delegator.ts: Tool to fetch and render delegator staking position stats.
• lib/ai/tools/get-protocol.ts: Tool to fetch and render protocol-level stats.
• lib/ai/tools/get-current-round.ts: Tool to fetch and render current round info.
• lib/ai/tools/get-performance.ts: Tool to fetch and render orchestrator performance leaderboard data.
• lib/ai/tools/get-ai-usage.ts: Tool to fetch and render AI pipeline usage overview and per-orchestrator metrics.
• lib/ai/tools/get-events.ts: Tool to fetch and render recent protocol events into a table.
• lib/ai/tools/get-treasury.ts: Tool to fetch and render treasury proposals as a table.
• hooks/useExplorerStore.tsx: Adds aiChatOpen state and setter.
• layouts/main.tsx: Mounts the new <AiChat /> component in the main layout.
• components/AiChat/index.tsx: FAB + panel open/close wiring via the explorer store.
• components/AiChat/ChatPanel.tsx: Panel container using useChat + DefaultChatTransport.
• components/AiChat/ChatInput.tsx: Textarea input with Enter-to-send and disabled/loading states.
• components/AiChat/MessageThread.tsx: Message list rendering + auto-scroll behavior.
• components/AiChat/MessageBubble.tsx: Renders message text and tool result parts (table/stats/chart/error).
• components/AiChat/SuggestedQuestions.tsx: "Try asking" clickable suggestion list.
• components/AiChat/renderers/TableRenderer.tsx: Table UI renderer for tool results.
• components/AiChat/renderers/StatsCard.tsx: Stats card UI renderer for tool results.
• components/AiChat/renderers/ChartRenderer.tsx: Chart UI renderer for tool results via Recharts.
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +25 to +41
```tsx
{suggestions.map((q) => (
  <Box
    key={q}
    onClick={() => onSelect(q)}
    css={{
      padding: "$2 $3",
      borderRadius: "$2",
      border: "1px solid $neutral6",
      cursor: "pointer",
      transition: "background-color 0.15s",
      "&:hover": {
        backgroundColor: "$neutral3",
      },
    }}
  >
    <Text size="2">{q}</Text>
  </Box>
```

Copilot AI Mar 10, 2026


The suggestion items are clickable Box elements with onClick, but they aren’t semantic buttons/links (no keyboard focus, no Enter/Space activation), which hurts accessibility. Render them as button elements (or add role="button", tabIndex=0, and key handlers) so keyboard and screen-reader users can use the suggestions.
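
The pattern Copilot describes can be sketched as follows. `isActivationKey` is a hypothetical helper, and the commented JSX shows how it would wire into the snippet above (names as in the original); this is one possible fix, not the PR's code.

```typescript
// Keyboard activation helper mirroring native button semantics:
// both Enter and Space should trigger the suggestion's onSelect.
function isActivationKey(key: string): boolean {
  // "Spacebar" covers some older browsers that don't report " ".
  return key === "Enter" || key === " " || key === "Spacebar";
}

// Usage in the suggestion item (JSX sketch; role/tabIndex make the Box
// focusable and announced as a button, onKeyDown handles activation):
//
//   <Box
//     role="button"
//     tabIndex={0}
//     onClick={() => onSelect(q)}
//     onKeyDown={(e) => {
//       if (isActivationKey(e.key)) {
//         e.preventDefault(); // stop Space from scrolling the page
//         onSelect(q);
//       }
//     }}
//   >
```

Rendering a real `<button>` would get all of this for free; the role/tabIndex route only makes sense if the styled Box must stay a div.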

Comment on lines +54 to +59
```tsx
orderBy:
  sortBy === "stake"
    ? ("totalStake" as never)
    : ("thirtyDayVolumeETH" as never),
orderDirection: "desc" as never,
},
```

Copilot AI Mar 10, 2026


The query variables use multiple as never casts for orderBy/orderDirection, which disables type-safety and can hide real schema mismatches. Since the generated schema exports Transcoder_OrderBy (and OrderDirection), prefer using those enum values directly to keep the query strongly typed.
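
A minimal sketch of the typed alternative. The local enum declarations below only stand in for the real codegen output (`Transcoder_OrderBy`, `OrderDirection`) and may not match its exact member names; the point is that the enum values replace the `as never` casts.

```typescript
// Stand-ins for the generated GraphQL schema enums; in the repo these would
// be imported from the codegen output instead of declared locally.
enum Transcoder_OrderBy {
  TotalStake = "totalStake",
  ThirtyDayVolumeEth = "thirtyDayVolumeETH",
}
enum OrderDirection {
  Asc = "asc",
  Desc = "desc",
}

// Strongly typed query variables: a schema rename now fails at compile time
// instead of being silenced by `as never`.
function orderVariables(sortBy: "stake" | "volume") {
  return {
    orderBy:
      sortBy === "stake"
        ? Transcoder_OrderBy.TotalStake
        : Transcoder_OrderBy.ThirtyDayVolumeEth,
    orderDirection: OrderDirection.Desc,
  };
}
```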

Comment on lines +55 to +57
```tsx
"Total Unbonding (LPT)": pendingUnbonds
  .reduce((sum, lock) => sum + Number(lock?.amount ?? 0), 0)
  .toFixed(2),
```

Copilot AI Mar 10, 2026


Total Unbonding (LPT) is summing raw lock.amount values but never divides by 1e18, so the displayed number will be in wei (and wildly too large) while the label says LPT. Convert the summed amount to LPT (or format units) before calling toFixed/displaying it.
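
A hedged sketch of the corrected sum, assuming `amount` is a 1e18-scaled (wei-style) string as elsewhere in the subgraph; `totalUnbondingLpt` and `UnbondingLock` are illustrative names, not the PR's.

```typescript
type UnbondingLock = { amount?: string | null };

// Sum the raw 1e18-scaled amounts, then convert once to LPT before
// formatting, so the label and the unit actually agree.
function totalUnbondingLpt(pendingUnbonds: UnbondingLock[]): string {
  const wei = pendingUnbonds.reduce(
    (sum, lock) => sum + Number(lock?.amount ?? 0),
    0
  );
  return (wei / 1e18).toFixed(2);
}
```

Note that `Number` loses precision above 2^53 wei; for display purposes that is usually acceptable, but a bigint or formatUnits-style conversion would be exact.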

Comment on lines +79 to +83
```tsx
(Number(o.totalStake) / 1e18).toFixed(2),
Number(o.thirtyDayVolumeETH).toFixed(4),
`${(Number(o.feeShare) / 10000).toFixed(2)}%`,
`${(Number(o.rewardCut) / 10000).toFixed(2)}%`,
o.delegators?.length ?? 0,
```

Copilot AI Mar 10, 2026


feeShare/rewardCut scaling looks inconsistent with the rest of the codebase (existing UI logic divides these values by 1,000,000 to get a percentage). Dividing by 10,000 here will produce incorrect percentages. Align the divisor with how feeShare/rewardCut are represented in the subgraph and elsewhere in the app.
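
If the fields are indeed 1e6-scaled as the review states (1,000,000 = 100%), a single shared formatter keeps the divisor in one place, so the same conversion applies here and in the other two spots flagged below. `formatPerc` and `PERC_DIVISOR` are illustrative names, and the 1e6 scale is an assumption taken from the review, not verified against the subgraph.

```typescript
// Assumed scale from the review: feeShare/rewardCut are fractions scaled
// by 1e6, i.e. 1_000_000 means 100%.
const PERC_DIVISOR = 1_000_000;

function formatPerc(raw: string | number): string {
  const fraction = Number(raw) / PERC_DIVISOR; // e.g. "250000" -> 0.25
  return `${(fraction * 100).toFixed(2)}%`;    // -> "25.00%"
}
```

Centralizing the divisor also means that if the subgraph scale ever turns out to be different, there is exactly one constant to fix.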

Comment on lines +48 to +49
```tsx
"Fee Share": `${(Number(transcoder.feeShare) / 10000).toFixed(2)}%`,
"Reward Cut": `${(Number(transcoder.rewardCut) / 10000).toFixed(2)}%`,
```

Copilot AI Mar 10, 2026


feeShare/rewardCut are divided by 10,000 when formatting percentages, but elsewhere in the repo these values are treated as 1e6-scaled (divide by 1,000,000). Using 10,000 will show incorrect values to users. Update the conversion to match the actual scale used by the subgraph/app.

Comment on lines +72 to +74
```tsx
const e = event as { delegate?: { id: string }; feeShare?: string; rewardCut?: string };
details = `${e.delegate?.id ?? "?"} updated: fee share ${(Number(e.feeShare ?? 0) / 10000).toFixed(2)}%, reward cut ${(Number(e.rewardCut ?? 0) / 10000).toFixed(2)}%`;
break;
```

Copilot AI Mar 10, 2026


In TranscoderUpdateEvent details, feeShare/rewardCut are divided by 10,000 to compute percentages, but the rest of the app treats these fields as 1e6-scaled. This will render incorrect fee/reward percentages in the event feed output. Use the same scaling factor as elsewhere.

Comment on lines +65 to +72
```tsx
if (userQuery) {
  const cached = await getCachedResponse(userQuery);
  if (cached) {
    res.setHeader("X-Cache", "HIT");
    res.setHeader("Content-Type", "text/plain; charset=utf-8");
    res.write(cached);
    return res.end();
  }
```

Copilot AI Mar 10, 2026


The cache HIT path writes a plain text body (res.write(cached)) while the non-cached path returns an AI SDK UI message stream (pipeUIMessageStreamToResponse). If the client transport expects the UI message stream protocol, cache hits will fail to parse / not update useChat state correctly. Consider caching and replaying the same stream format, or disable the semantic cache for this endpoint until it can return a protocol-compatible response.

Comment on lines +59 to +63
```tsx
await index.upsert({
  id: `chat-${Date.now()}`,
  data: question,
  metadata: { response, ts: Date.now() },
});
```

Copilot AI Mar 10, 2026


setCachedResponse uses `chat-${Date.now()}` as the id for every entry. This prevents overwriting/updating existing cache items and causes the vector index to grow without bound (the TTL is only enforced in metadata at read time). Use a stable deterministic id (e.g., a hash of the normalized question) and/or explicitly delete/expire old items to avoid unbounded index growth and cost.
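
One way to get a stable id, as the review suggests, is to hash the normalized question. `cacheId` is a hypothetical helper; the upsert in the comment at the end shows how it would slot into the snippet above.

```typescript
import { createHash } from "node:crypto";

// Deterministic cache id: the same question (after trimming, lowercasing,
// and collapsing whitespace) always maps to the same vector id, so repeat
// questions overwrite the existing entry instead of growing the index.
function cacheId(question: string): string {
  const normalized = question.trim().toLowerCase().replace(/\s+/g, " ");
  const digest = createHash("sha256").update(normalized).digest("hex");
  return `chat-${digest.slice(0, 16)}`;
}

// Upsert sketch (index/question/response as in the snippet above):
//   await index.upsert({
//     id: cacheId(question),
//     data: question,
//     metadata: { response, ts: Date.now() },
//   });
```

A read-time TTL check is still needed, but with stable ids stale entries get replaced on the next write rather than accumulating.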


```tsx
const response = await fetchWithRetry(
  `${baseUrl}/api/aggregated_stats`
).then((res) => res.json());
```


Suggested change:

```diff
-).then((res) => res.json());
+)
+  .then((res) => res.json())
+  .catch(() => ({}));
```

Missing error handling on the fetchWithRetry call in getPerformanceTool causes unhandled errors that crash tool execution if the metrics server is down or returns invalid JSON.


```tsx
  "Vote End",
],
rows: proposals.map((p) => [
  p.id.slice(0, 10) + "...",
```


Suggested change:

```diff
-p.id.slice(0, 10) + "...",
+(p.id ?? "Unknown").slice(0, 10) + "...",
```

Missing null/undefined check on p.id is inconsistent with the defensive patterns used elsewhere in this file and can throw at runtime if the id is null.



```tsx
const result = streamText({
  model,
  system: systemPrompt,
```


Missing error handling in the AI chat API causes incomplete responses when the AI provider fails or the stream errors mid-response.

