Bump ai-sdk deps and update AI configs #98
Conversation
- Bumped @ai-sdk packages and prettier; added ai-sdk-provider-opencode-sdk
- Export new convexMemory using ModelRouterEmbeddingModel and semantic/workingMemory options
- Change Lance/Mongo defaults to 'mastra_vectors' and set Lance embedding dimension to 3072; simplify Lance working memory template
- Update Gemini CLI: use thinkingConfig, add verbose/logger options
- Switch Google model calls from languageModel to chat (incl. image model)
- Add ResearchPhase type and make researchPhase field type-safe
- Add typed providerOptions for Google/OpenAI in mastra initialization
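As a rough illustration of the new convexMemory export, here is a minimal sketch assuming @mastra/memory's `Memory` constructor and the option names from the reviewer's class diagram. The names `storageCon` and `vectorCon` appear in the reviewed file; `embedderCon` and all option values are hypothetical.

```typescript
import { Memory } from '@mastra/memory'

// Sketch only: storageCon and vectorCon are the Convex store/vector instances
// from the reviewed config file; embedderCon is a hypothetical name for the
// ModelRouterEmbeddingModel instance. All option values are illustrative.
declare const storageCon: any
declare const vectorCon: any
declare const embedderCon: any

export const convexMemory = new Memory({
  storage: storageCon,
  vector: vectorCon, // ConvexVector, 3072-dimension embeddings
  embedder: embedderCon,
  options: {
    lastMessages: 20,
    generateTitle: true,
    semanticRecall: {
      topK: 5,
      messageRange: { before: 2, after: 2 },
      scope: 'resource',
      threshold: 0.7,
    },
    workingMemory: {
      enabled: true,
      scope: 'resource',
      version: 'vnext',
      template: '# User Profile',
    },
  },
})
```

The exact fields accepted by `Memory` should be checked against the @mastra/memory version pinned in package.json.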
🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
Reviewer's Guide

Updates AI-related dependencies and reworks several AI configuration modules: introduces a Convex-backed Memory instance with semantic/working memory options, aligns Lance/Mongo vector configs and the working memory template, modernizes Gemini CLI and Google provider model usage to newer ai-sdk patterns, and improves type safety for research workflows and provider options.

Class diagram for updated memory configs, Google/OpenAI provider options, and research runtime context:

```mermaid
classDiagram
    class Memory {
        +storage any
        +vector any
        +embedder ModelRouterEmbeddingModel
        +options MemoryOptions
    }
    class MemoryOptions {
        +number lastMessages
        +boolean generateTitle
        +SemanticRecallOptions semanticRecall
        +WorkingMemoryOptions workingMemory
    }
    class SemanticRecallOptions {
        +number topK
        +MessageRangeOptions messageRange
        +string scope
        +IndexConfigOptions indexConfig
        +number threshold
    }
    class MessageRangeOptions {
        +number before
        +number after
    }
    class IndexConfigOptions {
        +string type
        +string metric
        +IvfOptions ivf
    }
    class IvfOptions {
        +number lists
    }
    class WorkingMemoryOptions {
        +boolean enabled
        +string scope
        +string version
        +string template
    }
    class ConvexStore {
        +string id
        +string deploymentUrl
        +string adminAuthToken
    }
    class ConvexVector {
        +string id
        +string deploymentUrl
        +string adminAuthToken
    }
    class ModelRouterEmbeddingModel {
        +string modelId
    }
    class LanceConfig {
        +string dbPath
        +string tableName
        +number embeddingDimension
        +ModelRouterEmbeddingModel embeddingModel
    }
    class MongoConfig {
        +string uri
        +string dbName
        +string collectionName
        +number embeddingDimension
        +ModelRouterEmbeddingModel embeddingModel
    }
    class GeminiCliModelConfig {
        +number contextWindow
        +number maxTokens
        +boolean supportsStreaming
        +boolean verbose
        +Logger logger
        +ThinkingConfig thinkingConfig
        +boolean codeExecution
        +boolean structuredOutput
        +boolean functionCalling
        +boolean urlContext
        +boolean grounding
    }
    class ThinkingConfig {
        +string thinkingLevel
        +number thinkingBudget
        +boolean showThoughts
    }
    class Logger {
        +log any
    }
    class GoogleChatModels {
        +any gemini3Pro
        +any gemini3Flash
        +any gemini25Pro
        +any gemini25Flash
        +any gemini25FlashLite
        +any gemini25ComputerUse
        +any gemini25FlashAlt
        +any gemini25FlashImage
    }
    class MastraInitConfig {
        +ProviderOptions providerOptions
    }
    class ProviderOptions {
        +GoogleGenerativeAIProviderOptions google
        +OpenAIResponsesProviderOptions openai
    }
    class GoogleGenerativeAIProviderOptions {
    }
    class OpenAIResponsesProviderOptions {
    }
    class UserTier {
        <<enumeration>>
        free
        pro
        enterprise
    }
    class ResearchPhase {
        <<enumeration>>
        initial
        followup
        validation
    }
    class ResearchRuntimeContext {
        +UserTier userTier
        +string language
        +string userId
        +ResearchPhase researchPhase
    }
    Memory --> MemoryOptions
    MemoryOptions --> SemanticRecallOptions
    MemoryOptions --> WorkingMemoryOptions
    SemanticRecallOptions --> MessageRangeOptions
    SemanticRecallOptions --> IndexConfigOptions
    IndexConfigOptions --> IvfOptions
    Memory --> ConvexStore
    Memory --> ConvexVector
    Memory --> ModelRouterEmbeddingModel
    LanceConfig --> ModelRouterEmbeddingModel
    MongoConfig --> ModelRouterEmbeddingModel
    GeminiCliModelConfig --> ThinkingConfig
    GeminiCliModelConfig --> Logger
    MastraInitConfig --> ProviderOptions
    ProviderOptions --> GoogleGenerativeAIProviderOptions
    ProviderOptions --> OpenAIResponsesProviderOptions
    ResearchRuntimeContext --> UserTier
    ResearchRuntimeContext --> ResearchPhase
```
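The restructured Gemini CLI options can be sketched as a plain config object. Field names follow the GeminiCliModelConfig and ThinkingConfig shapes in the reviewer's class diagram; all values are illustrative assumptions, not the repository's actual settings.

```typescript
// Illustrative values only; field names follow the GeminiCliModelConfig and
// ThinkingConfig shapes from the reviewer's class diagram.
const geminiCliModelConfig = {
  contextWindow: 1_048_576,
  maxTokens: 65_536,
  supportsStreaming: true,
  verbose: true,
  logger: console, // any object with a `log` method fits the Logger shape
  thinkingConfig: {
    thinkingLevel: 'high',
    thinkingBudget: 8192,
    showThoughts: true,
  },
  codeExecution: true,
  structuredOutput: true,
  functionCalling: true,
  urlContext: true,
  grounding: true,
}

console.log(geminiCliModelConfig.thinkingConfig.thinkingBudget)
```

Grouping the three thinking-related knobs under one `thinkingConfig` object (rather than three top-level fields) matches the nesting the PR describes.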
File-Level Changes
Caution: Review failed. The pull request is closed.

Summary by CodeRabbit
Walkthrough

This PR updates package dependencies to newer versions, refines type definitions for research phases, restructures model configurations across the Gemini and Google providers, introduces a new convexMemory export, and updates default collection/table names from 'governed_rag' to 'mastra_vectors'. Additionally, provider options are wired into route configuration.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes

Hello @ssdeanx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on updating and refining the AI infrastructure by bumping key AI-related dependencies and standardizing configurations across various AI models and memory stores. The changes aim to improve model interaction, enhance memory management capabilities, and bolster type safety within the AI agent's runtime context, ensuring a more robust and maintainable system.

Highlights
🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.
Code Review
This pull request includes several important updates across the AI SDK dependencies and configurations. Key changes include bumping @ai-sdk packages and prettier, and adding the ai-sdk-provider-opencode-sdk. The ResearchPhase type has been introduced for improved type safety in researchAgent.ts. The convexMemory configuration has been significantly expanded with detailed options for message management, semantic recall, and working memory. The Gemini CLI configurations now utilize thinkingConfig and include verbose/logger options for better control and debugging. All Google model calls have been switched from languageModel to chat, including for image models, aligning with updated API usage. Furthermore, the default table/collection names for LanceDB and MongoDB have been updated to 'mastra_vectors', and Lance's embedding dimension is now 3072. Finally, typed providerOptions for Google and OpenAI have been added to the chatRoute in mastra/index.ts, enhancing type safety and configurability for AI providers.
```diff
  // Additional variants
- gemini25FlashAlt: google.languageModel('gemini-2.5-flash-preview-09-2025'),
+ gemini25FlashAlt: google.chat('gemini-2.5-flash-preview-09-2025'),
  //gemini: ('google/gemini-2.5-flash-preview-09-2025'),
```
Hey - I've found 1 issue, and left some high level feedback:

- In `convexMemory` the comment still refers to `PgVector` and a flat index even though the backend is `ConvexVector`; consider updating or removing these copy-pasted comments (and the partially commented-out `indexConfig`/`indexName` block) to avoid confusion about the actual vector store and index configuration being used.
- The `providerOptions` object in `mastra` is currently initialized as empty objects cast to `GoogleGenerativeAIProviderOptions` and `OpenAIResponsesProviderOptions`; it would be more type-safe to either make these options truly optional or construct minimal valid configs instead of relying on type assertions with `{}`.
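One way to address the second point is sketched below, with hypothetical local stand-ins for the two SDK option types (the real interfaces come from @ai-sdk/google and @ai-sdk/openai, and their actual fields differ). Marking both providers optional removes the need for `{} as SomeOptionsType` assertions.

```typescript
// Hypothetical stand-ins for the real SDK types from @ai-sdk/google and
// @ai-sdk/openai; the fields shown here are illustrative only.
interface GoogleGenerativeAIProviderOptions {
  thinkingConfig?: { thinkingBudget?: number }
}
interface OpenAIResponsesProviderOptions {
  reasoningEffort?: 'low' | 'medium' | 'high'
}

// Making each provider's options optional means an absent provider is simply
// omitted, instead of being faked with an empty-object type assertion.
interface ProviderOptions {
  google?: GoogleGenerativeAIProviderOptions
  openai?: OpenAIResponsesProviderOptions
}

const providerOptions: ProviderOptions = {
  google: { thinkingConfig: { thinkingBudget: 1024 } },
  // openai omitted: valid without any type assertion
}

console.log(providerOptions.openai === undefined)
```

The alternative the reviewer mentions (constructing minimal valid configs) would instead supply a small literal for each provider; either way the `as` cast disappears.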
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `convexMemory` the comment still refers to `PgVector` and a flat index even though the backend is `ConvexVector`; consider updating or removing these copy-pasted comments (and the partially commented-out `indexConfig`/`indexName` block) to avoid confusion about the actual vector store and index configuration being used.
- The `providerOptions` object in `mastra` is currently initialized as empty objects cast to `GoogleGenerativeAIProviderOptions` and `OpenAIResponsesProviderOptions`; it would be more type-safe to either make these options truly optional or construct minimal valid configs instead of relying on type assertions with `{}`.
## Individual Comments
### Comment 1
<location> `src/mastra/config/google.ts:42` </location>
<code_context>
export const googleImageModels = {
// Gemini Flash Image model
- gemini25FlashImage: google.languageModel('gemini-2.5-flash-image'),
+ gemini25FlashImage: google.chat('gemini-2.5-flash-image'),
// gemini3ProImage: google('gemini-3-pro-image-preview'),
// Imagen 4.0 models
</code_context>
<issue_to_address>
**issue (bug_risk):** Using `google.chat` for an image-oriented model ID is likely the wrong factory and may break image generation.
This model ID appears to be for image generation, but it’s now created via `google.chat` instead of `google.image`. That mismatch is likely to cause runtime or capability errors when generating images. Please switch to the appropriate image factory, e.g. `google.image('gemini-2.5-flash-image')`, or whichever factory the SDK specifies for this model.
</issue_to_address>
```diff
  export const googleImageModels = {
      // Gemini Flash Image model
-     gemini25FlashImage: google.languageModel('gemini-2.5-flash-image'),
+     gemini25FlashImage: google.chat('gemini-2.5-flash-image'),
```
issue (bug_risk): Using `google.chat` for an image-oriented model ID is likely the wrong factory and may break image generation.

This model ID appears to be for image generation, but it's now created via `google.chat` instead of `google.image`. That mismatch is likely to cause runtime or capability errors when generating images. Please switch to the appropriate image factory, e.g. `google.image('gemini-2.5-flash-image')`, or whichever factory the SDK specifies for this model.
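To make the suggested chat/image distinction concrete, here is a self-contained sketch using a local stand-in for the provider. The real `google` instance comes from @ai-sdk/google, and its exact factory names should be checked against that SDK's documentation.

```typescript
// Local stand-in so the sketch runs on its own; the real provider object is
// created by @ai-sdk/google, and its factory names may differ.
const google = {
  chat: (id: string) => ({ kind: 'chat' as const, id }),
  image: (id: string) => ({ kind: 'image' as const, id }),
}

// Chat-capable model IDs go through the chat factory...
const gemini25Flash = google.chat('gemini-2.5-flash')

// ...while image model IDs should use the image factory, per the review.
const gemini25FlashImage = google.image('gemini-2.5-flash-image')

console.log(gemini25FlashImage.kind) // 'image'
```

The point of the review comment is that the factory, not just the model ID, determines which capabilities the SDK exposes for the resulting model object.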
Pull request overview
This PR updates AI SDK dependencies and makes configuration improvements across multiple files. The changes focus on standardizing vector storage naming conventions, improving type safety, updating AI model configurations, and exporting a new memory instance for Convex.
Changes:
- Bumped @ai-sdk packages (google, google-vertex, openai, openai-compatible, react) and prettier to latest versions; added ai-sdk-provider-opencode-sdk dependency
- Exported convexMemory with ModelRouterEmbeddingModel and semantic/working memory configuration
- Updated default collection/table names to 'mastra_vectors' for MongoDB and Lance; changed Lance embedding dimension from 1536 to 3072 and simplified working memory template
- Refactored Gemini CLI config to use nested thinkingConfig object and added verbose/logger options
- Switched Google model API calls from languageModel() to chat() method
- Added ResearchPhase type and typed providerOptions for Google/OpenAI in mastra initialization
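The new ResearchPhase type can be sketched as a string-literal union, with values taken from the ResearchPhase enumeration in the reviewer's class diagram; the guard function is a hypothetical addition for illustration.

```typescript
// Values from the ResearchPhase enumeration in the review's class diagram.
type ResearchPhase = 'initial' | 'followup' | 'validation'

// Hypothetical runtime guard, useful when the phase arrives as a plain string.
const isResearchPhase = (value: string): value is ResearchPhase =>
  ['initial', 'followup', 'validation'].includes(value)

// A typed researchPhase field now rejects arbitrary strings at compile time.
const researchPhase: ResearchPhase = 'followup'

console.log(isResearchPhase(researchPhase)) // true
console.log(isResearchPhase('done')) // false
```

This is what "type-safe researchPhase field" buys: a typo like `'folowup'` becomes a compile error instead of a silent runtime mismatch.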
Reviewed changes
Copilot reviewed 8 out of 10 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| package.json | Bumped @ai-sdk packages and prettier; added ai-sdk-provider-opencode-sdk |
| package-lock.json | Updated lockfile with new dependency versions and integrity hashes |
| src/mastra/config/mongodb.ts | Changed default collection name to 'mastra_vectors' |
| src/mastra/config/lance.ts | Updated table name to 'mastra_vectors', embedding dimension to 3072, simplified working memory template |
| src/mastra/config/convex.ts | Added convexMemory export with full semantic and working memory configuration |
| src/mastra/config/gemini-cli.ts | Restructured thinking options into thinkingConfig object; added verbose/logger options; changed import from logError to log |
| src/mastra/config/google.ts | Changed all model methods from languageModel() to chat() |
| src/mastra/agents/researchAgent.ts | Added ResearchPhase type for type-safe researchPhase field |
| src/mastra/index.ts | Added typed providerOptions for google and openai providers |
| src/mastra/config/opencode.ts | Empty file (new dependency added but not implemented) |
```typescript
export const convexMemory = new Memory({
    storage: storageCon,
    vector: vectorCon, // Using PgVector with flat for 3072 dimension embeddings (gemini-embedding-001)
```
Copilot AI, Jan 15, 2026
The comment incorrectly refers to "PgVector" when this configuration is using ConvexVector (as defined on line 11). Update the comment to accurately reflect the vector store being used.
Suggested change:

```diff
- vector: vectorCon, // Using PgVector with flat for 3072 dimension embeddings (gemini-embedding-001)
+ vector: vectorCon, // Using ConvexVector for 3072 dimension embeddings (gemini-embedding-001)
```
```diff
  import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli'
- import { logError } from './logger'
- import os from 'os'
+ import { log } from './logger'
```
Copilot AI, Jan 15, 2026
The import was changed from 'logError' to 'log', but the commented line 13 still references 'os.homedir()' while the 'os' import was removed. If the cacheDir option is intended to be used in the future, the 'os' import should be retained (even if commented), or the comment should be updated to remove the os.homedir() reference.

Summary by Sourcery
Update AI configuration, memory backends, and provider typings while aligning Google/Gemini integrations with the latest ai-sdk APIs.
New Features:
Enhancements:
Build: