
Conversation

@ssdeanx (Owner) commented Jan 15, 2026

  • Bumped @ai-sdk packages and prettier; added ai-sdk-provider-opencode-sdk
  • Export new convexMemory using ModelRouterEmbeddingModel and semantic/workingMemory options
  • Change Lance/Mongo defaults to 'mastra_vectors' and set Lance embedding dimension to 3072; simplify Lance working memory template
  • Update Gemini CLI: use thinkingConfig, add verbose/logger options
  • Switch Google model calls from languageModel to chat (incl. image model)
  • Add ResearchPhase type and make researchPhase field type-safe
  • Add typed providerOptions for Google/OpenAI in mastra initialization

Summary by Sourcery

Update AI configuration, memory backends, and provider typings while aligning Google/Gemini integrations with the latest ai-sdk APIs.

New Features:

  • Expose a new convexMemory instance wired to Convex storage/vector backends with configurable semantic and working memory options.
  • Add verbose logging and configurable thinking behavior to Gemini CLI models via thinkingConfig and logger hooks.
  • Introduce typed providerOptions for Google and OpenAI in Mastra initialization to support strongly-typed model configuration.

Enhancements:

  • Standardize Lance and MongoDB vector storage defaults to mastra_vectors and update Lance to use 3072-dimensional embeddings with a simplified working memory template.
  • Switch Google model integrations from languageModel to chat (including image-capable models) to match newer ai-sdk behavior.
  • Strengthen the research agent context by introducing a dedicated ResearchPhase type for the researchPhase field.

Build:

  • Bump @ai-sdk Google/OpenAI-related packages and @ai-sdk/react to the latest minor versions and upgrade prettier.
  • Add ai-sdk-provider-opencode-sdk as a new dependency.

Copilot AI review requested due to automatic review settings January 15, 2026 17:14


@github-actions

🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.


sourcery-ai bot commented Jan 15, 2026

Reviewer's Guide

Updates AI-related dependencies and reworks several AI configuration modules: introduces a Convex-backed Memory instance with semantic/working memory options, aligns Lance/Mongo vector configs and working memory template, modernizes Gemini CLI and Google provider model usage to newer ai-sdk patterns, and improves type safety for research workflows and provider options.

Class diagram for updated memory configs, Google/OpenAI provider options, and research runtime context

classDiagram

    class Memory {
        +storage any
        +vector any
        +embedder ModelRouterEmbeddingModel
        +options MemoryOptions
    }

    class MemoryOptions {
        +number lastMessages
        +boolean generateTitle
        +SemanticRecallOptions semanticRecall
        +WorkingMemoryOptions workingMemory
    }

    class SemanticRecallOptions {
        +number topK
        +MessageRangeOptions messageRange
        +string scope
        +IndexConfigOptions indexConfig
        +number threshold
    }

    class MessageRangeOptions {
        +number before
        +number after
    }

    class IndexConfigOptions {
        +string type
        +string metric
        +IvfOptions ivf
    }

    class IvfOptions {
        +number lists
    }

    class WorkingMemoryOptions {
        +boolean enabled
        +string scope
        +string version
        +string template
    }

    class ConvexStore {
        +string id
        +string deploymentUrl
        +string adminAuthToken
    }

    class ConvexVector {
        +string id
        +string deploymentUrl
        +string adminAuthToken
    }

    class ModelRouterEmbeddingModel {
        +string modelId
    }

    class LanceConfig {
        +string dbPath
        +string tableName
        +number embeddingDimension
        +ModelRouterEmbeddingModel embeddingModel
    }

    class MongoConfig {
        +string uri
        +string dbName
        +string collectionName
        +number embeddingDimension
        +ModelRouterEmbeddingModel embeddingModel
    }

    class GeminiCliModelConfig {
        +number contextWindow
        +number maxTokens
        +boolean supportsStreaming
        +boolean verbose
        +Logger logger
        +ThinkingConfig thinkingConfig
        +boolean codeexecution
        +boolean structuredOutput
        +boolean functionCalling
        +boolean urlContext
        +boolean grounding
    }

    class ThinkingConfig {
        +string thinkingLevel
        +number thinkingBudget
        +boolean showThoughts
    }

    class Logger {
        +log any
    }

    class GoogleChatModels {
        +any gemini3Pro
        +any gemini3Flash
        +any gemini25Pro
        +any gemini25Flash
        +any gemini25FlashLite
        +any gemini25ComputerUse
        +any gemini25FlashAlt
        +any gemini25FlashImage
    }

    class MastraInitConfig {
        +ProviderOptions providerOptions
    }

    class ProviderOptions {
        +GoogleGenerativeAIProviderOptions google
        +OpenAIResponsesProviderOptions openai
    }

    class GoogleGenerativeAIProviderOptions {
    }

    class OpenAIResponsesProviderOptions {
    }

    class UserTier {
        <<enumeration>>
        free
        pro
        enterprise
    }

    class ResearchPhase {
        <<enumeration>>
        initial
        followup
        validation
    }

    class ResearchRuntimeContext {
        +UserTier userTier
        +string language
        +string userId
        +ResearchPhase researchPhase
    }

    Memory --> MemoryOptions
    MemoryOptions --> SemanticRecallOptions
    MemoryOptions --> WorkingMemoryOptions
    SemanticRecallOptions --> MessageRangeOptions
    SemanticRecallOptions --> IndexConfigOptions
    IndexConfigOptions --> IvfOptions

    Memory --> ConvexStore
    Memory --> ConvexVector
    Memory --> ModelRouterEmbeddingModel

    LanceConfig --> ModelRouterEmbeddingModel
    MongoConfig --> ModelRouterEmbeddingModel

    GeminiCliModelConfig --> ThinkingConfig
    GeminiCliModelConfig --> Logger

    MastraInitConfig --> ProviderOptions
    ProviderOptions --> GoogleGenerativeAIProviderOptions
    ProviderOptions --> OpenAIResponsesProviderOptions

    ResearchRuntimeContext --> UserTier
    ResearchRuntimeContext --> ResearchPhase

File-Level Changes

Introduce Convex-backed Memory configuration using ModelRouterEmbeddingModel with semantic and working memory options.
  • Rename ConvexStore and ConvexVector instances to storageCon and vectorCon and wire them into a new Memory instance export.
  • Configure the Memory embedder with ModelRouterEmbeddingModel using the google/gemini-embedding-001 embedding model.
  • Add configurable semanticRecall options (topK, messageRange, scope, threshold, and commented-out index configuration) sourced from environment variables.
  • Define a concise workingMemory configuration with a structured template focused on user context and session notes.
Files: src/mastra/config/convex.ts
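
A minimal sketch of what the new export plausibly looks like, assembled from the class diagram above. Import paths, constructor signatures, and option values are assumptions for illustration, not the actual file contents:

import { Memory } from '@mastra/memory'
// ConvexStore / ConvexVector / ModelRouterEmbeddingModel import paths are guesses:
import { ConvexStore, ConvexVector } from '@mastra/convex'
import { ModelRouterEmbeddingModel } from '@mastra/core'

const storageCon = new ConvexStore({
    id: 'convex-storage',
    deploymentUrl: process.env.CONVEX_DEPLOYMENT_URL ?? '',
    adminAuthToken: process.env.CONVEX_ADMIN_TOKEN ?? '',
})

const vectorCon = new ConvexVector({
    id: 'convex-vector',
    deploymentUrl: process.env.CONVEX_DEPLOYMENT_URL ?? '',
    adminAuthToken: process.env.CONVEX_ADMIN_TOKEN ?? '',
})

export const convexMemory = new Memory({
    storage: storageCon,
    vector: vectorCon,
    // google/gemini-embedding-001 emits 3072-dimensional vectors
    embedder: new ModelRouterEmbeddingModel('google/gemini-embedding-001'),
    options: {
        lastMessages: 20,
        generateTitle: true,
        semanticRecall: {
            topK: Number(process.env.MEMORY_TOP_K ?? '5'),
            messageRange: { before: 2, after: 1 },
            scope: 'resource',
            threshold: 0.7,
        },
        workingMemory: {
            enabled: true,
            scope: 'resource',
            template: '# User Context\n- Name:\n- Goals:\n\n# Session Notes\n-',
        },
    },
})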
Update Gemini CLI models to use thinkingConfig and shared logging, aligning with new ai-sdk-provider-gemini-cli options.
  • Replace logError/os usage with a shared log import from the local logger module.
  • Wrap thinking-related settings into thinkingConfig objects for each Gemini model instead of top-level thinkingBudget/showThoughts fields.
  • Enable verbose mode and attach the shared logger for applicable Gemini CLI models.
  • Adjust individual model configs (e.g., thinkingLevel and showThoughts) to match the new configuration structure.
Files: src/mastra/config/gemini-cli.ts
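
A hedged sketch of the restructuring, using option names from the class diagram above; the exact ai-sdk-provider-gemini-cli settings surface is not verified here:

import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli'
import { log } from './logger'

// authType value is an assumption; the provider also supports API-key auth
const gemini = createGeminiProvider({ authType: 'oauth-personal' })

// Shape of a per-model config entry: thinking options now nest under
// thinkingConfig instead of sitting at the top level
const geminiProConfig = {
    contextWindow: 1_000_000,
    maxTokens: 65_536,
    supportsStreaming: true,
    verbose: true, // newly enabled
    logger: log,   // shared logger replaces the old logError helper
    thinkingConfig: {
        thinkingLevel: 'high',
        thinkingBudget: 32_768,
        showThoughts: true,
    },
}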
Align Lance and MongoDB memory/vector store defaults and simplify Lance working memory template.
  • Change Lance vector table default name to mastra_vectors and increase default embedding dimension to 3072.
  • Replace the verbose Lance working memory template with a shorter assistant-centric session scratchpad format.
  • Update MongoDB default collection name from governed_rag to mastra_vectors to align with Lance defaults.
Files: src/mastra/config/lance.ts, src/mastra/config/mongodb.ts
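
In config terms the aligned defaults amount to something like this (environment-variable names are illustrative):

// Lance: table renamed and dimension raised to match gemini-embedding-001
const LANCE_TABLE_NAME = process.env.LANCE_TABLE_NAME ?? 'mastra_vectors' // was 'governed_rag'
const LANCE_EMBEDDING_DIMENSION = 3072 // was 1536

// MongoDB: collection default now matches the Lance table name
const MONGODB_COLLECTION = process.env.MONGODB_COLLECTION ?? 'mastra_vectors' // was 'governed_rag'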
Switch Google Gemini model bindings to use chat-based APIs for text and image models.
  • Replace google.languageModel calls with google.chat for all Gemini chat/text models.
  • Switch the Gemini 2.5 Flash Image binding from google.languageModel to google.chat while leaving Imagen bindings on google.image.
  • Leave a commented placeholder for a generic gemini model alias.
Files: src/mastra/config/google.ts
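
The binding change itself is mechanical. A sketch, assuming the installed @ai-sdk/google exposes both factories (the Imagen model ID is illustrative):

import { google } from '@ai-sdk/google'

export const googleChatModels = {
    // previously: google.languageModel('gemini-2.5-pro'), etc.
    gemini25Pro: google.chat('gemini-2.5-pro'),
    gemini25Flash: google.chat('gemini-2.5-flash'),
}

export const googleImageModels = {
    // now also routed through chat (see the review discussion below)
    gemini25FlashImage: google.chat('gemini-2.5-flash-image'),
    // Imagen bindings stay on the image factory:
    imagen4: google.image('imagen-4.0-generate-001'),
}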
Improve type safety and configuration of Mastra workflows and research agent runtime context.
  • Introduce a ResearchPhase union type and use it for the researchPhase field in ResearchRuntimeContext instead of a looser string union plus string.
  • Extend the Mastra initialization config to include typed providerOptions for Google and OpenAI, importing OpenAIResponsesProviderOptions from @ai-sdk/openai.
  • Use GoogleGenerativeAIProviderOptions and OpenAIResponsesProviderOptions types to annotate providerOptions for better type checking.
Files: src/mastra/agents/researchAgent.ts, src/mastra/index.ts
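
A sketch of both type-level changes; the union members come from the class diagram, while the concrete providerOptions payloads below are placeholder assumptions:

import type { GoogleGenerativeAIProviderOptions } from '@ai-sdk/google'
import type { OpenAIResponsesProviderOptions } from '@ai-sdk/openai'

// researchAgent.ts: replaces the looser string union plus string
export type ResearchPhase = 'initial' | 'followup' | 'validation'

export interface ResearchRuntimeContext {
    userTier: 'free' | 'pro' | 'enterprise'
    language: string
    userId: string
    researchPhase: ResearchPhase
}

// index.ts: typed providerOptions (field values are illustrative only)
const providerOptions = {
    google: {
        thinkingConfig: { includeThoughts: true },
    } satisfies GoogleGenerativeAIProviderOptions,
    openai: {
        reasoningEffort: 'medium',
    } satisfies OpenAIResponsesProviderOptions,
}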
Bump AI SDK and related dependencies and add a new provider package.
  • Increment versions of @ai-sdk/google, @ai-sdk/google-vertex, @ai-sdk/openai, @ai-sdk/openai-compatible, and @ai-sdk/react to their latest minor/patch releases.
  • Add ai-sdk-provider-opencode-sdk as a new dependency.
  • Update prettier to ^3.8.0 in devDependencies.
  • Regenerate package-lock.json to reflect new dependency versions and additions.
Files: package.json, package-lock.json



coderabbitai bot commented Jan 15, 2026

Caution

Review failed

The pull request is closed.

Summary by CodeRabbit

  • Chores

    • Updated AI SDK package dependencies to latest patch versions.
    • Added new AI SDK provider dependency.
    • Updated development tooling dependencies.
  • Refactor

    • Restructured AI model configuration for improved organization.
    • Updated memory management templates and database naming conventions.
    • Enhanced debug logging support across model configurations.


Walkthrough

This PR updates package dependencies to newer versions, refines type definitions for research phases, restructures model configurations across Gemini and Google providers, introduces a new convexMemory export, and updates default collection/table names from 'governed_rag' to 'mastra_vectors'. Additionally, provider options are wired into route configuration.

Changes

  • Dependency Updates (package.json): Bumps AI SDK packages to newer patch versions; adds ai-sdk-provider-opencode-sdk dependency; updates prettier from ^3.7.4 to ^3.8.0.
  • Type Refinements (src/mastra/agents/researchAgent.ts): Introduces ResearchPhase union type; narrows ResearchRuntimeContext.researchPhase to use the new type instead of allowing arbitrary strings.
  • Memory & Vector Configuration (src/mastra/config/convex.ts): Exports new convexMemory instance with configured storage, vector, embedder, and memory management options; renames internal constants from storage/vector to storageCon/vectorCon.
  • Default Collection/Table Names (src/mastra/config/lance.ts, src/mastra/config/mongodb.ts): Changes default collection/table name from 'governed_rag' to 'mastra_vectors'; updates Lance embedding dimension from 1536 to 3072; simplifies Lance working memory template.
  • Model Configuration Restructuring (src/mastra/config/gemini-cli.ts): Nests thinkingBudget and showThoughts into thinkingConfig objects across Gemini models; adds verbose: true and logger: log to multiple models.
  • Provider Method Replacement (src/mastra/config/google.ts): Replaces google.languageModel() calls with google.chat() across all Google Chat model mappings (gemini3Pro, gemini3Flash, gemini25Pro, gemini25Flash, gemini25FlashLite, gemini25ComputerUse, gemini25FlashAlt, gemini25FlashImage).
  • Route Provider Options (src/mastra/index.ts): Adds type-only imports for GoogleGenerativeAIProviderOptions and OpenAIResponsesProviderOptions; wires providerOptions into chatRoute defaultOptions.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Models now chat where language once stood,

Memory configured, collections renamed—all good!

Thinking wrapped neat in config delight,

Type safety tightened, the future burns bright! ✨


📜 Recent review details

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53a7f34 and 66ccb59.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (9)
  • package.json
  • src/mastra/agents/researchAgent.ts
  • src/mastra/config/convex.ts
  • src/mastra/config/gemini-cli.ts
  • src/mastra/config/google.ts
  • src/mastra/config/lance.ts
  • src/mastra/config/mongodb.ts
  • src/mastra/config/opencode.ts
  • src/mastra/index.ts



Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @ssdeanx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on updating and refining the AI infrastructure by bumping key AI-related dependencies and standardizing configurations across various AI models and memory stores. The changes aim to improve model interaction, enhance memory management capabilities, and bolster type safety within the AI agent's runtime context, ensuring a more robust and maintainable system.

Highlights

  • Dependency Updates: Updated @ai-sdk packages to their latest versions, bumped prettier, and introduced a new dependency: ai-sdk-provider-opencode-sdk.
  • AI Model Configuration Enhancements: Refactored Gemini CLI model configurations to use a nested thinkingConfig object, added verbose logging, and integrated a custom logger. All Google model calls were switched from languageModel to the chat API, including image models, indicating a shift to chat-centric interactions.
  • Memory Management Standardization: Standardized the default collection/table names for LanceDB and MongoDB to mastra_vectors for consistency. The Lance embedding dimension was updated to 3072, aligning with the gemini-embedding-001 model, and its working memory template was significantly simplified.
  • New Convex Memory Export: A new convexMemory instance is now exported, configured with ModelRouterEmbeddingModel for embeddings, and detailed semanticRecall and workingMemory options, enhancing memory capabilities for Convex-based storage.
  • Type Safety Improvements: Introduced a new ResearchPhase type ('initial' | 'followup' | 'validation') to make the researchPhase field in ResearchRuntimeContext type-safe, improving code reliability.
  • Typed Provider Options: Added typed providerOptions for Google and OpenAI within the mastra initialization, allowing for more structured and type-safe configuration of AI providers.

@github-actions

🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.

@ssdeanx ssdeanx merged commit 46b5c1e into main Jan 15, 2026
19 of 25 checks passed
@gemini-code-assist bot left a comment

Code Review

This pull request includes several important updates across the AI SDK dependencies and configurations. Key changes include bumping @ai-sdk packages and prettier, and adding the ai-sdk-provider-opencode-sdk. The ResearchPhase type has been introduced for improved type safety in researchAgent.ts. The convexMemory configuration has been significantly expanded with detailed options for message management, semantic recall, and working memory. The Gemini CLI configurations now utilize thinkingConfig and include verbose/logger options for better control and debugging. All Google model calls have been switched from languageModel to chat, including for image models, aligning with updated API usage. Furthermore, the default table/collection names for LanceDB and MongoDB have been updated to 'mastra_vectors', and Lance's embedding dimension is now 3072. Finally, typed providerOptions for Google and OpenAI have been added to the chatRoute in mastra/index.ts, enhancing type safety and configurability for AI providers.

  // Additional variants
- gemini25FlashAlt: google.languageModel('gemini-2.5-flash-preview-09-2025'),
+ gemini25FlashAlt: google.chat('gemini-2.5-flash-preview-09-2025'),
  //gemini: ('google/gemini-2.5-flash-preview-09-2025'),
@gemini-code-assist bot commented (severity: medium)

This commented-out line appears to be dead code. Please remove it to keep the codebase clean.

@sourcery-ai bot left a comment

Hey - I've found 1 issue, and left some high level feedback:

  • In convexMemory the comment still refers to PgVector and a flat index even though the backend is ConvexVector; consider updating or removing these copy-pasted comments (and the partially commented-out indexConfig/indexName block) to avoid confusion about the actual vector store and index configuration being used.
  • The providerOptions object in mastra is currently initialized as empty objects cast to GoogleGenerativeAIProviderOptions and OpenAIResponsesProviderOptions; it would be more type-safe to either make these options truly optional or construct minimal valid configs instead of relying on type assertions with {}.

  export const googleImageModels = {
      // Gemini Flash Image model
-     gemini25FlashImage: google.languageModel('gemini-2.5-flash-image'),
+     gemini25FlashImage: google.chat('gemini-2.5-flash-image'),
@sourcery-ai bot commented

issue (bug_risk): Using google.chat for an image-oriented model ID is likely the wrong factory and may break image generation.

This model ID appears to be for image generation, but it’s now created via google.chat instead of google.image. That mismatch is likely to cause runtime or capability errors when generating images. Please switch to the appropriate image factory, e.g. google.image('gemini-2.5-flash-image'), or whichever factory the SDK specifies for this model.
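
In code, the suggested fix would be the one-line factory swap below. One caveat: gemini-2.5-flash-image generates images through the chat-style generateContent API in some SDK versions, so whichever factory the installed @ai-sdk/google actually expects for this ID should be verified before applying:

import { google } from '@ai-sdk/google'

export const googleImageModels = {
    // Sourcery's proposed change (unverified against the installed SDK):
    gemini25FlashImage: google.image('gemini-2.5-flash-image'),
}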

Copilot AI left a comment

Pull request overview

This PR updates AI SDK dependencies and makes configuration improvements across multiple files. The changes focus on standardizing vector storage naming conventions, improving type safety, updating AI model configurations, and exporting a new memory instance for Convex.

Changes:

  • Bumped @ai-sdk packages (google, google-vertex, openai, openai-compatible, react) and prettier to latest versions; added ai-sdk-provider-opencode-sdk dependency
  • Exported convexMemory with ModelRouterEmbeddingModel and semantic/working memory configuration
  • Updated default collection/table names to 'mastra_vectors' for MongoDB and Lance; changed Lance embedding dimension from 1536 to 3072 and simplified working memory template
  • Refactored Gemini CLI config to use nested thinkingConfig object and added verbose/logger options
  • Switched Google model API calls from languageModel() to chat() method
  • Added ResearchPhase type and typed providerOptions for Google/OpenAI in mastra initialization

Reviewed changes

Copilot reviewed 8 out of 10 changed files in this pull request and generated 2 comments.

Show a summary per file

  • package.json: Bumped @ai-sdk packages and prettier; added ai-sdk-provider-opencode-sdk
  • package-lock.json: Updated lockfile with new dependency versions and integrity hashes
  • src/mastra/config/mongodb.ts: Changed default collection name to 'mastra_vectors'
  • src/mastra/config/lance.ts: Updated table name to 'mastra_vectors', embedding dimension to 3072, simplified working memory template
  • src/mastra/config/convex.ts: Added convexMemory export with full semantic and working memory configuration
  • src/mastra/config/gemini-cli.ts: Restructured thinking options into thinkingConfig object; added verbose/logger options; changed import from logError to log
  • src/mastra/config/google.ts: Changed all model methods from languageModel() to chat()
  • src/mastra/agents/researchAgent.ts: Added ResearchPhase type for type-safe researchPhase field
  • src/mastra/index.ts: Added typed providerOptions for google and openai providers
  • src/mastra/config/opencode.ts: Empty file (new dependency added but not implemented)


export const convexMemory = new Memory({
    storage: storageCon,
    vector: vectorCon, // Using PgVector with flat for 3072 dimension embeddings (gemini-embedding-001)
Copilot AI commented Jan 15, 2026

The comment incorrectly refers to "PgVector" when this configuration is using ConvexVector (as defined on line 11). Update the comment to accurately reflect the vector store being used.

Suggested change:
-   vector: vectorCon, // Using PgVector with flat for 3072 dimension embeddings (gemini-embedding-001)
+   vector: vectorCon, // Using ConvexVector for 3072 dimension embeddings (gemini-embedding-001)

  import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli'
- import { logError } from './logger'
- import os from 'os'
+ import { log } from './logger'
Copilot AI commented Jan 15, 2026

The import was changed from 'logError' to 'log', but the commented line 13 still references 'os.homedir()' while the 'os' import was removed. If the cacheDir option is intended to be used in the future, the 'os' import should be retained (even if commented), or the comment should be updated to remove the os.homedir() reference.
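
One way to resolve the mismatch, if the cacheDir option might return later; the cacheDir line is hypothetical, echoing the commented-out usage the review refers to:

import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli'
import { log } from './logger'
// import os from 'os' // retained (commented) to match the commented cacheDir usage:
// cacheDir: os.homedir() + '/.gemini-cli-cache',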
