
Conversation

@JuanmanDev

Added support for multiple LLMs, especially local models via Ollama.

  • Added LLMFactory for provider-agnostic LLM creation (see the usage sketch below)
  • Supports OpenAI, Ollama (local/free), Anthropic, Google, Groq, Azure, Together, HuggingFace, OpenRouter
  • Updated memory system to be provider-agnostic
  • Fixed Ollama integration with tool calling support (llama3.2, llama3.1, mistral-nemo, qwen2.5)
  • Added comprehensive documentation and examples
  • Updated CLI with new Ollama model selections
  • 100% backward compatible - OpenAI remains default
  • Verified working with tests
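
A minimal usage sketch of how provider switching is intended to work (module paths and config keys are taken from the files and review comments in this PR; treat them as illustrative rather than exact):

```python
# Illustrative sketch only: import paths and config keys are assumed from this
# PR's file list (tradingagents/default_config.py, tradingagents/graph/trading_graph.py).
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.graph.trading_graph import TradingAgentsGraph

config = {**DEFAULT_CONFIG}
config["llm_provider"] = "ollama"                     # local and free, no API key required
config["backend_url"] = "http://localhost:11434/v1"   # default Ollama endpoint
config["deep_think_llm"] = "llama3.2"                 # a model verified for tool calling
config["quick_think_llm"] = "llama3.2"

graph = TradingAgentsGraph(config=config)             # leave config untouched to keep the OpenAI default
```
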
Copilot AI review requested due to automatic review settings October 31, 2025 18:23
@gemini-code-assist

Summary of Changes

Hello @JuanmanDev, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's flexibility and cost-efficiency by introducing comprehensive multi-provider LLM support. It abstracts away provider-specific implementations through a new factory pattern, allowing users to seamlessly switch between various AI models, including free local options like Ollama, and optimize for cost, performance, or privacy. The update also includes a refactored memory system and extensive documentation to guide users through the new capabilities, all while ensuring existing functionalities remain intact.

Highlights

  • Multi-Provider LLM Support: The project now supports a wide array of LLM providers including OpenAI, Ollama (local), Anthropic, Google, Groq, Azure, Together, HuggingFace, and OpenRouter, moving beyond a single-vendor dependency.
  • LLM Factory Implementation: A new LLMFactory module has been introduced to provide a unified, provider-agnostic interface for creating and managing LLM instances, simplifying integration and switching between models.
  • Provider-Agnostic Memory System: The financial situation memory system has been refactored to be provider-agnostic, specifically addressing embedding generation for local models like Ollama without requiring OpenAI API keys (see the sketch after this list).
  • Enhanced CLI and Configuration: The command-line interface (CLI) has been updated with new Ollama model selections, and the default configuration now includes flexible settings for choosing LLM providers, models, and parameters.
  • Comprehensive Documentation and Examples: Extensive new documentation and examples have been added, including a detailed LLM Provider Guide, Quick Start guides, and migration instructions to help users leverage the new multi-provider capabilities.
  • Backward Compatibility: All changes maintain 100% backward compatibility, meaning existing code configured for OpenAI will continue to function without modification.
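
As a rough illustration of the provider-agnostic embedding path mentioned in the Provider-Agnostic Memory System item above, a local Ollama instance can serve embeddings without any OpenAI key. The endpoint and model below follow Ollama's documented /api/embeddings API and the nomic-embed-text model referenced in this PR; the memory module's actual call may differ:

```python
# Hedged sketch: fetch an embedding from a local Ollama server using the
# nomic-embed-text model referenced in memory.py; error handling kept minimal.
import requests

def ollama_embed(text: str, base_url: str = "http://localhost:11434") -> list[float]:
    resp = requests.post(
        f"{base_url}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

# Example: embed a financial-situation note before storing it in ChromaDB.
vector = ollama_embed("Rates held steady; tech earnings beat expectations.")
print(len(vector))  # nomic-embed-text typically returns a 768-dimensional vector
```
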
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


gemini-code-assist bot left a comment


Code Review

This is an excellent pull request that significantly enhances the project by introducing multi-provider LLM support. The addition of the LLMFactory provides a clean, maintainable, and extensible way to handle different LLM providers. The extensive documentation, examples, and configuration updates are very well done and will make it easy for users to leverage this new flexibility. I've identified a couple of high-severity issues in the new example and test files where an incorrect model is used, which would cause them to fail. I also have a medium-severity suggestion to improve the robustness and readability of the FinancialSituationMemory class. Overall, this is a fantastic contribution.

Comment on lines +28 to +29
config["deep_think_llm"] = "llama3" # Use llama3 for complex reasoning
config["quick_think_llm"] = "llama3" # Use llama3 for quick tasks


Severity: high

The example is configured to use llama3, which does not support the tool-calling/function-calling capabilities required by TradingAgents. This will cause the example to fail. Please use a model that is verified to support tool calling, such as llama3.2, as mentioned in your documentation.

Suggested change
config["deep_think_llm"] = "llama3" # Use llama3 for complex reasoning
config["quick_think_llm"] = "llama3" # Use llama3 for quick tasks
config["deep_think_llm"] = "llama3.2" # Use llama3.2 for tool calling support
config["quick_think_llm"] = "llama3.2" # Use llama3.2 for tool calling support

Comment on lines +47 to +48
ollama_config["deep_think_llm"] = "llama3" # Using available model
ollama_config["quick_think_llm"] = "llama3" # Using available model


Severity: high

This test is configured to use the llama3 model, which lacks support for the tool-calling features essential for TradingAgents to function correctly. This will lead to test failures. Please update the test to use a model with tool-calling support, like llama3.2.

Suggested change
ollama_config["deep_think_llm"] = "llama3" # Using available model
ollama_config["quick_think_llm"] = "llama3" # Using available model
ollama_config["deep_think_llm"] = "llama3.2" # Using a model that supports tool calling
ollama_config["quick_think_llm"] = "llama3.2" # Using a model that supports tool calling

Comment on lines +9 to +30
        self.provider = config.get("llm_provider", "openai").lower()

        # Determine embedding model based on provider
        if self.provider == "ollama":
            self.embedding = "nomic-embed-text"
            self.use_ollama_embeddings = True
        elif config.get("backend_url") == "http://localhost:11434/v1":
            self.embedding = "nomic-embed-text"
            self.use_ollama_embeddings = True
        else:
            self.embedding = "text-embedding-3-small"
            self.client = OpenAI(base_url=config["backend_url"])
            self.use_ollama_embeddings = False

        # Only create OpenAI client if we're using OpenAI embeddings
        if not self.use_ollama_embeddings:
            api_key = os.getenv("OPENAI_API_KEY")
            if api_key:
                self.client = OpenAI(base_url=config.get("backend_url"), api_key=api_key)
            else:
                self.client = OpenAI(base_url=config.get("backend_url"))
        else:
            self.client = None


Severity: medium

The logic in the __init__ method for determining the embedding strategy and initializing the client can be simplified for better robustness and readability.

  1. The check for the Ollama provider should rely only on config.get("llm_provider") instead of also checking for a hardcoded backend_url. This makes the implementation more robust against different user configurations for Ollama.
  2. The OpenAI client initialization can be simplified. The openai library automatically reads the OPENAI_API_KEY from the environment if it's not provided, so explicitly checking for it and passing it is redundant.
        self.provider = config.get("llm_provider", "openai").lower()

        # Determine embedding model based on provider
        if self.provider == "ollama":
            self.embedding = "nomic-embed-text"
            self.use_ollama_embeddings = True
        else:
            self.embedding = "text-embedding-3-small"
            self.use_ollama_embeddings = False

        # Only create OpenAI client if we're using OpenAI embeddings
        if not self.use_ollama_embeddings:
            # The OpenAI client automatically uses the OPENAI_API_KEY env var if set.
            self.client = OpenAI(base_url=config.get("backend_url"))
        else:
            self.client = None


Copilot AI left a comment


Pull Request Overview

This PR adds comprehensive multi-provider LLM support to TradingAgents, enabling the framework to work with 9+ AI providers beyond OpenAI. The changes introduce a factory pattern for provider-agnostic LLM initialization while maintaining 100% backward compatibility.

Key changes:

  • New LLM factory module providing unified interface for OpenAI, Ollama, Anthropic, Google, Groq, Azure, Together AI, HuggingFace, and OpenRouter
  • Refactored initialization logic in trading graph to use factory pattern
  • Updated memory module to support provider-specific embedding strategies
  • Enhanced CLI with provider-specific model recommendations

Reviewed Changes

Copilot reviewed 29 out of 30 changed files in this pull request and generated 16 comments.

Show a summary per file
  • tradingagents/llm_factory.py: New factory pattern implementation for creating provider-agnostic LLM instances
  • tradingagents/graph/trading_graph.py: Replaced hardcoded provider logic with factory-based initialization
  • tradingagents/agents/utils/memory.py: Made embedding logic provider-agnostic with fallback to ChromaDB defaults
  • tradingagents/default_config.py: Added configuration options for multiple providers with examples
  • cli/utils.py: Updated model selection menus with Ollama-specific recommendations
  • requirements.txt: Reorganized dependencies by provider with clear documentation
  • Multiple test/example files: Added comprehensive testing and example scripts for multi-provider usage
  • Multiple documentation files: Created extensive guides for setup, migration, and provider-specific configuration
Comments suppressed due to low confidence (1)

README.md:1

  • The model 'gpt-4.1-nano' does not exist in OpenAI's model lineup. It should be replaced with a valid model identifier such as 'gpt-4o-mini' or another documented OpenAI model.
<p align="center">


Comment on lines +82 to +83
if base_url:
    params["base_url"] = base_url

Copilot AI Oct 31, 2025


Missing API key parameter handling. The factory should accept and pass through the api_key parameter to allow explicit key configuration instead of relying solely on environment variables. Consider adding api_key: Optional[str] = None to the method signature and including it in params if provided.
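
A small sketch of the suggested shape (the helper name and surrounding signature here are hypothetical; only the api_key pass-through is the point):

```python
# Hypothetical helper illustrating the suggested api_key handling; the real
# create_llm signature in tradingagents/llm_factory.py may differ.
from typing import Any, Optional

def build_llm_params(model: str,
                     base_url: Optional[str] = None,
                     api_key: Optional[str] = None,
                     **kwargs: Any) -> dict[str, Any]:
    """Collect constructor arguments for a provider's chat-model class."""
    params: dict[str, Any] = {"model": model, **kwargs}
    if base_url:
        params["base_url"] = base_url
    if api_key:
        params["api_key"] = api_key  # explicit key takes precedence over environment variables
    return params
```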

Comment on lines +15 to +17
        elif config.get("backend_url") == "http://localhost:11434/v1":
            self.embedding = "nomic-embed-text"
            self.use_ollama_embeddings = True

Copilot AI Oct 31, 2025


Hardcoded URL check for Ollama detection is fragile. This will fail if the Ollama URL is configured differently (e.g., 'http://localhost:11434' without '/v1' as shown in line 258 of cli/utils.py). The logic should rely solely on the provider setting from line 9, not URL string matching.

Suggested change
        elif config.get("backend_url") == "http://localhost:11434/v1":
            self.embedding = "nomic-embed-text"
            self.use_ollama_embeddings = True

("Google", "https://generativelanguage.googleapis.com/v1"),
("Openrouter", "https://openrouter.ai/api/v1"),
("Ollama", "http://localhost:11434/v1"),
("Ollama", "http://localhost:11434"),

Copilot AI Oct 31, 2025


Inconsistent Ollama URL format. The URL 'http://localhost:11434' is missing the '/v1' suffix that's used in memory.py line 15 for Ollama detection. This inconsistency will cause the memory module to not recognize Ollama when selected through CLI, breaking provider-agnostic functionality.

Suggested change
("Ollama", "http://localhost:11434"),
("Ollama", "http://localhost:11434/v1"),

)

self.propagator = Propagator()
self.propagator = Propagator(max_recur_limit=200)

Copilot AI Oct 31, 2025


The change from default max_recur_limit to hardcoded value of 200 appears unrelated to multi-provider support. This should either be extracted to a configuration parameter or explained in the PR description as it changes system behavior.

Suggested change
self.propagator = Propagator(max_recur_limit=200)
self.propagator = Propagator(
    max_recur_limit=self.config.get("max_recur_limit", 200)
)

print("Testing Unsupported Provider Handling...\n")

try:
    llm = LLMFactory.create_llm(

Copilot AI Oct 31, 2025


Variable llm is not used.

Suggested change
    llm = LLMFactory.create_llm(
    LLMFactory.create_llm(

Comment on lines +106 to +118
graph = TradingAgentsGraph(config=DEFAULT_CONFIG)

# Option 2: Use Ollama (local)
ollama_config = {**DEFAULT_CONFIG, **OLLAMA_CONFIG}
graph = TradingAgentsGraph(config=ollama_config)

# Option 3: Use Anthropic Claude
anthropic_config = {**DEFAULT_CONFIG, **ANTHROPIC_CONFIG}
graph = TradingAgentsGraph(config=anthropic_config)

# Option 4: Use Google Gemini
google_config = {**DEFAULT_CONFIG, **GOOGLE_CONFIG}
graph = TradingAgentsGraph(config=google_config)

Copilot AI Oct 31, 2025


This assignment to 'graph' is unnecessary as it is redefined before this value is used.

Suggested change
graph = TradingAgentsGraph(config=DEFAULT_CONFIG)
# Option 2: Use Ollama (local)
ollama_config = {**DEFAULT_CONFIG, **OLLAMA_CONFIG}
graph = TradingAgentsGraph(config=ollama_config)
# Option 3: Use Anthropic Claude
anthropic_config = {**DEFAULT_CONFIG, **ANTHROPIC_CONFIG}
graph = TradingAgentsGraph(config=anthropic_config)
# Option 4: Use Google Gemini
google_config = {**DEFAULT_CONFIG, **GOOGLE_CONFIG}
graph = TradingAgentsGraph(config=google_config)
# graph = TradingAgentsGraph(config=DEFAULT_CONFIG)
# Option 2: Use Ollama (local)
ollama_config = {**DEFAULT_CONFIG, **OLLAMA_CONFIG}
# graph = TradingAgentsGraph(config=ollama_config)
# Option 3: Use Anthropic Claude
anthropic_config = {**DEFAULT_CONFIG, **ANTHROPIC_CONFIG}
# graph = TradingAgentsGraph(config=anthropic_config)
# Option 4: Use Google Gemini
google_config = {**DEFAULT_CONFIG, **GOOGLE_CONFIG}
# graph = TradingAgentsGraph(config=google_config)

# Test 1: Import the factory
print("Test 1: Importing LLM Factory...")
try:
    from tradingagents.llm_factory import LLMFactory, get_llm_instance

Copilot AI Oct 31, 2025


Import of 'LLMFactory' is not used.

Suggested change
    from tradingagents.llm_factory import LLMFactory, get_llm_instance
    from tradingagents.llm_factory import get_llm_instance

# Test 4: Check if langchain-community is installed
print("Test 4: Checking for langchain-community package...")
try:
    from langchain_community.chat_models import ChatOllama

Copilot AI Oct 31, 2025


Import of 'ChatOllama' is not used.
