feat: add AI companion chat panel for protocol Q&A (#584) #587
Implement a floating chat panel that lets users ask natural language
questions about Livepeer protocol data, backed by real-time data from
The Graph subgraph and existing Explorer API routes.
Architecture:
- Gemini 2.5 Flash via Vercel AI SDK with streaming responses
- 9 predefined read-only tools (orchestrators, delegators, protocol
stats, current round, performance, AI usage, events, treasury)
- Semantic caching via Upstash Vector (5-min TTL, cosine similarity)
- Rate limiting via Upstash Redis (20 req/min per IP)
- All tool parameters Zod-validated, no raw user queries
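The rate-limiting policy above (20 req/min per IP) can be sketched in plain TypeScript. This is an illustrative in-memory version of a sliding window, not the actual Upstash Redis implementation the PR uses:

```typescript
// Illustrative in-memory sliding-window limiter (the PR uses Upstash Redis instead).
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request identified by `key` (e.g. an IP) is allowed.
  allow(key: string, now = Date.now()): boolean {
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}

// 20 requests per 60-second window, mirroring the policy described above.
const limiter = new SlidingWindowLimiter(20, 60_000);
```

An in-memory map only works per-process; the Redis-backed limiter exists precisely so the count survives across serverless invocations.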
New files:
- pages/api/ai/chat.ts - streaming chat API route
- lib/ai/{config,ratelimit,cache}.ts - infrastructure
- lib/ai/tools/*.ts - 9 data-fetching tools
- components/AiChat/* - FAB, chat panel, message thread, renderers
Modified files:
- hooks/useExplorerStore.tsx - added aiChatOpen state
- layouts/main.tsx - mounted AiChat component
- .env.example - added AI/caching env vars
- package.json - added ai, @ai-sdk/google, @ai-sdk/react, upstash deps
https://claude.ai/code/session_01XFuaKpyHwtpin16qR7dt9P
Pull request overview
Adds an in-app AI companion chat experience to the Livepeer Explorer, backed by a new streaming API route and a small set of read-only, Zod-validated “tools” that fetch protocol data from the existing subgraph/API infrastructure.
Changes:
- Introduces a `/api/ai/chat` streaming endpoint with rate limiting and semantic caching hooks.
- Adds 9 AI tools for protocol Q&A (orchestrators, delegators, protocol stats, round info, performance, AI usage, events, treasury).
- Mounts a floating AI chat UI (FAB + panel + message renderers) into the main layout and adds store state for panel open/close.
Reviewed changes
Copilot reviewed 27 out of 28 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| pnpm-lock.yaml | Locks new AI SDK + Upstash dependencies and transitive changes. |
| package.json | Adds ai, @ai-sdk/*, and Upstash packages required for chat, caching, and rate limiting. |
| .env.example | Documents new optional env vars for Gemini + Upstash Redis/Vector. |
| pages/api/ai/chat.ts | New streaming chat API route with rate limiting and semantic cache lookup/write. |
| lib/ai/config.ts | Defines Gemini model selection and system prompt. |
| lib/ai/ratelimit.ts | Upstash Redis sliding-window rate limiter wrapper. |
| lib/ai/cache.ts | Upstash Vector semantic cache wrapper with TTL logic. |
| lib/ai/tools/index.ts | Barrel exports for the AI tools. |
| lib/ai/tools/get-orchestrators.ts | Tool to fetch and render a ranked orchestrator table. |
| lib/ai/tools/get-orchestrator.ts | Tool to fetch and render single-orchestrator stats. |
| lib/ai/tools/get-delegator.ts | Tool to fetch and render delegator staking position stats. |
| lib/ai/tools/get-protocol.ts | Tool to fetch and render protocol-level stats. |
| lib/ai/tools/get-current-round.ts | Tool to fetch and render current round info. |
| lib/ai/tools/get-performance.ts | Tool to fetch and render orchestrator performance leaderboard data. |
| lib/ai/tools/get-ai-usage.ts | Tool to fetch and render AI pipeline usage overview and per-orchestrator metrics. |
| lib/ai/tools/get-events.ts | Tool to fetch and render recent protocol events into a table. |
| lib/ai/tools/get-treasury.ts | Tool to fetch and render treasury proposals as a table. |
| hooks/useExplorerStore.tsx | Adds aiChatOpen state and setter. |
| layouts/main.tsx | Mounts the new <AiChat /> component in the main layout. |
| components/AiChat/index.tsx | FAB + panel open/close wiring via the explorer store. |
| components/AiChat/ChatPanel.tsx | Panel container using useChat + DefaultChatTransport. |
| components/AiChat/ChatInput.tsx | Textarea input with Enter-to-send and disabled/loading states. |
| components/AiChat/MessageThread.tsx | Message list rendering + auto-scroll behavior. |
| components/AiChat/MessageBubble.tsx | Renders message text and tool result parts (table/stats/chart/error). |
| components/AiChat/SuggestedQuestions.tsx | “Try asking” clickable suggestion list. |
| components/AiChat/renderers/TableRenderer.tsx | Table UI renderer for tool results. |
| components/AiChat/renderers/StatsCard.tsx | Stats card UI renderer for tool results. |
| components/AiChat/renderers/ChartRenderer.tsx | Chart UI renderer for tool results via Recharts. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```tsx
{suggestions.map((q) => (
  <Box
    key={q}
    onClick={() => onSelect(q)}
    css={{
      padding: "$2 $3",
      borderRadius: "$2",
      border: "1px solid $neutral6",
      cursor: "pointer",
      transition: "background-color 0.15s",
      "&:hover": {
        backgroundColor: "$neutral3",
      },
    }}
  >
    <Text size="2">{q}</Text>
  </Box>
```
The suggestion items are clickable Box elements with onClick, but they aren’t semantic buttons/links (no keyboard focus, no Enter/Space activation), which hurts accessibility. Render them as button elements (or add role="button", tabIndex=0, and key handlers) so keyboard and screen-reader users can use the suggestions.
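One minimal fix is to factor the activation logic into a keyboard handler that responds to Enter/Space. This is a sketch, not the PR's actual code; `onSelect` is assumed to match the callback signature in the excerpt above:

```typescript
// Sketch: keyboard activation for a non-button clickable element.
// `onSelect` mirrors the suggestion callback in the excerpt above (assumed signature).
function activateOnKey(onSelect: () => void) {
  return (e: { key: string; preventDefault: () => void }) => {
    if (e.key === "Enter" || e.key === " ") {
      e.preventDefault(); // stop Space from scrolling the page
      onSelect();
    }
  };
}
```

The returned handler would be wired as `onKeyDown={activateOnKey(() => onSelect(q))}` alongside `role="button"` and `tabIndex={0}` on the Box; rendering a real `<button>` element avoids needing any of this.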
```tsx
orderBy:
  sortBy === "stake"
    ? ("totalStake" as never)
    : ("thirtyDayVolumeETH" as never),
orderDirection: "desc" as never,
},
```
The query variables use multiple as never casts for orderBy/orderDirection, which disables type-safety and can hide real schema mismatches. Since the generated schema exports Transcoder_OrderBy (and OrderDirection), prefer using those enum values directly to keep the query strongly typed.
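The typed alternative could look like the sketch below. The enum shapes here are assumptions about what the GraphQL codegen emits (names and casing may differ in the actual generated schema output):

```typescript
// Assumed codegen output; confirm the real names in the generated schema module.
enum Transcoder_OrderBy {
  TotalStake = "totalStake",
  ThirtyDayVolumeEth = "thirtyDayVolumeETH",
}
enum OrderDirection {
  Asc = "asc",
  Desc = "desc",
}

// Builds the query variables without any `as never` casts, so a schema rename
// surfaces as a compile error instead of being silently swallowed.
function orderVariables(sortBy: "stake" | "volume") {
  return {
    orderBy:
      sortBy === "stake"
        ? Transcoder_OrderBy.TotalStake
        : Transcoder_OrderBy.ThirtyDayVolumeEth,
    orderDirection: OrderDirection.Desc,
  };
}
```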
| "Total Unbonding (LPT)": pendingUnbonds | ||
| .reduce((sum, lock) => sum + Number(lock?.amount ?? 0), 0) | ||
| .toFixed(2), |
Total Unbonding (LPT) is summing raw lock.amount values but never divides by 1e18, so the displayed number will be in wei (and wildly too large) while the label says LPT. Convert the summed amount to LPT (or format units) before calling toFixed/displaying it.
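A corrected sketch (the helper name is ours, not from the PR): sum in wei, then divide by 1e18 before formatting:

```typescript
// Sketch: sum unbonding lock amounts (wei strings) and convert to LPT for display.
// Note: Number() loses precision above ~9e15 wei per lock; a BigInt sum would be
// exact, but this mirrors the Number-based style of the excerpt above.
function totalUnbondingLpt(locks: Array<{ amount?: string } | undefined>): string {
  const wei = locks.reduce((sum, lock) => sum + Number(lock?.amount ?? 0), 0);
  return (wei / 1e18).toFixed(2);
}
```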
```tsx
(Number(o.totalStake) / 1e18).toFixed(2),
Number(o.thirtyDayVolumeETH).toFixed(4),
`${(Number(o.feeShare) / 10000).toFixed(2)}%`,
`${(Number(o.rewardCut) / 10000).toFixed(2)}%`,
o.delegators?.length ?? 0,
```
feeShare/rewardCut scaling looks inconsistent with the rest of the codebase (existing UI logic divides these values by 1,000,000 to get a percentage). Dividing by 10,000 here will produce incorrect percentages. Align the divisor with how feeShare/rewardCut are represented in the subgraph and elsewhere in the app.
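Assuming these fields are 1e6-scaled, as the comment says the rest of the Explorer treats them, the conversion would look like the sketch below. The helper name and constant are ours; confirm the divisor against the subgraph schema and the existing formatting utilities before adopting it:

```typescript
// Sketch: format a 1e6-scaled feeShare/rewardCut value as a percentage string.
// PERC_DIVISOR = 1_000_000 is an assumption based on the review comment above.
const PERC_DIVISOR = 1_000_000;

function formatCut(raw: string | number): string {
  return `${((Number(raw) / PERC_DIVISOR) * 100).toFixed(2)}%`;
}
```

With this scale, a stored `feeShare` of `950000` renders as 95%, whereas dividing by 10,000 would show 95 as 9,500%.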
| "Fee Share": `${(Number(transcoder.feeShare) / 10000).toFixed(2)}%`, | ||
| "Reward Cut": `${(Number(transcoder.rewardCut) / 10000).toFixed(2)}%`, |
feeShare/rewardCut are divided by 10,000 when formatting percentages, but elsewhere in the repo these values are treated as 1e6-scaled (divide by 1,000,000). Using 10,000 will show incorrect values to users. Update the conversion to match the actual scale used by the subgraph/app.
```tsx
const e = event as { delegate?: { id: string }; feeShare?: string; rewardCut?: string };
details = `${e.delegate?.id ?? "?"} updated: fee share ${(Number(e.feeShare ?? 0) / 10000).toFixed(2)}%, reward cut ${(Number(e.rewardCut ?? 0) / 10000).toFixed(2)}%`;
break;
```
In TranscoderUpdateEvent details, feeShare/rewardCut are divided by 10,000 to compute percentages, but the rest of the app treats these fields as 1e6-scaled. This will render incorrect fee/reward percentages in the event feed output. Use the same scaling factor as elsewhere.
```tsx
if (userQuery) {
  const cached = await getCachedResponse(userQuery);
  if (cached) {
    res.setHeader("X-Cache", "HIT");
    res.setHeader("Content-Type", "text/plain; charset=utf-8");
    res.write(cached);
    return res.end();
  }
```
The cache HIT path writes a plain text body (res.write(cached)) while the non-cached path returns an AI SDK UI message stream (pipeUIMessageStreamToResponse). If the client transport expects the UI message stream protocol, cache hits will fail to parse / not update useChat state correctly. Consider caching and replaying the same stream format, or disable the semantic cache for this endpoint until it can return a protocol-compatible response.
```tsx
await index.upsert({
  id: `chat-${Date.now()}`,
  data: question,
  metadata: { response, ts: Date.now() },
});
```
setCachedResponse uses `` id: `chat-${Date.now()}` `` for every entry. This prevents overwriting/updating existing cache items and will cause the vector index to grow without bound (TTL is only enforced in metadata at read time). Use a stable deterministic id (e.g., hash of the normalized question) and/or explicitly delete/expire old items to avoid unbounded index growth and cost.
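A deterministic id makes repeated questions overwrite the same vector entry. This is a sketch; the helper name and normalization choices are ours:

```typescript
import { createHash } from "crypto";

// Sketch: derive a stable cache id from the normalized question text, so the same
// question upserts the same vector entry instead of creating a new one each time.
function cacheIdFor(question: string): string {
  const normalized = question.trim().toLowerCase().replace(/\s+/g, " ");
  const digest = createHash("sha256").update(normalized).digest("hex");
  return `chat-${digest.slice(0, 16)}`;
}
```

Upserting with `id: cacheIdFor(question)` is then idempotent per question, and an expired entry found at read time can be deleted by that same id.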