feat: Add multi-provider LLM support with Ollama, Anthropic, Google, … #268
base: main
Conversation
…Groq, and more

- Added LLMFactory for provider-agnostic LLM creation
- Supports OpenAI, Ollama (local/free), Anthropic, Google, Groq, Azure, Together, HuggingFace, OpenRouter
- Updated memory system to be provider-agnostic
- Fixed Ollama integration with tool calling support (llama3.2, llama3.1, mistral-nemo, qwen2.5)
- Added comprehensive documentation and examples
- Updated CLI with new Ollama model selections
- 100% backward compatible - OpenAI remains default
- Verified working with tests
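Below is a minimal usage sketch of the provider switching described above. The import paths and config keys (`llm_provider`, `backend_url`, `deep_think_llm`, `quick_think_llm`) are taken from the files and examples referenced later in this PR and may not match the merged code exactly.

```python
# Hedged sketch only: import paths and key names are assumptions based on the
# files this PR touches (default_config.py, graph/trading_graph.py) and on the
# examples quoted in the reviews below.
from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.graph.trading_graph import TradingAgentsGraph

# OpenAI remains the default; overriding a few keys switches to a local Ollama model.
config = dict(DEFAULT_CONFIG)
config["llm_provider"] = "ollama"
config["backend_url"] = "http://localhost:11434/v1"
config["deep_think_llm"] = "llama3.2"   # a local model with tool-calling support
config["quick_think_llm"] = "llama3.2"

graph = TradingAgentsGraph(config=config)
```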
Summary of Changes

Hello @JuanmanDev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the project's flexibility and cost-efficiency by introducing comprehensive multi-provider LLM support. It abstracts away provider-specific implementations through a new factory pattern, allowing users to seamlessly switch between various AI models, including free local options like Ollama, and optimize for cost, performance, or privacy. The update also includes a refactored memory system and extensive documentation to guide users through the new capabilities, all while ensuring existing functionalities remain intact.

Highlights
Code Review
This is an excellent pull request that significantly enhances the project by introducing multi-provider LLM support. The addition of the LLMFactory provides a clean, maintainable, and extensible way to handle different LLM providers. The extensive documentation, examples, and configuration updates are very well done and will make it easy for users to leverage this new flexibility. I've identified a couple of high-severity issues in the new example and test files where an incorrect model is used, which would cause them to fail. I also have a medium-severity suggestion to improve the robustness and readability of the FinancialSituationMemory class. Overall, this is a fantastic contribution.
| config["deep_think_llm"] = "llama3" # Use llama3 for complex reasoning | ||
| config["quick_think_llm"] = "llama3" # Use llama3 for quick tasks |
The example is configured to use llama3, which does not support the tool-calling/function-calling capabilities required by TradingAgents. This will cause the example to fail. Please use a model that is verified to support tool calling, such as llama3.2, as mentioned in your documentation.
| config["deep_think_llm"] = "llama3" # Use llama3 for complex reasoning | |
| config["quick_think_llm"] = "llama3" # Use llama3 for quick tasks | |
| config["deep_think_llm"] = "llama3.2" # Use llama3.2 for tool calling support | |
| config["quick_think_llm"] = "llama3.2" # Use llama3.2 for tool calling support |
```python
ollama_config["deep_think_llm"] = "llama3"  # Using available model
ollama_config["quick_think_llm"] = "llama3"  # Using available model
```
This test is configured to use the llama3 model, which lacks support for the tool-calling features essential for TradingAgents to function correctly. This will lead to test failures. Please update the test to use a model with tool-calling support, like llama3.2.
Suggested change:

```diff
-ollama_config["deep_think_llm"] = "llama3"  # Using available model
-ollama_config["quick_think_llm"] = "llama3"  # Using available model
+ollama_config["deep_think_llm"] = "llama3.2"  # Using a model that supports tool calling
+ollama_config["quick_think_llm"] = "llama3.2"  # Using a model that supports tool calling
```
```python
self.provider = config.get("llm_provider", "openai").lower()

# Determine embedding model based on provider
if self.provider == "ollama":
    self.embedding = "nomic-embed-text"
    self.use_ollama_embeddings = True
elif config.get("backend_url") == "http://localhost:11434/v1":
    self.embedding = "nomic-embed-text"
    self.use_ollama_embeddings = True
else:
    self.embedding = "text-embedding-3-small"
    self.client = OpenAI(base_url=config["backend_url"])
    self.use_ollama_embeddings = False

# Only create OpenAI client if we're using OpenAI embeddings
if not self.use_ollama_embeddings:
    api_key = os.getenv("OPENAI_API_KEY")
    if api_key:
        self.client = OpenAI(base_url=config.get("backend_url"), api_key=api_key)
    else:
        self.client = OpenAI(base_url=config.get("backend_url"))
else:
    self.client = None
```
The logic in the `__init__` method for determining the embedding strategy and initializing the client can be simplified for better robustness and readability.

- The check for the Ollama provider should rely only on `config.get("llm_provider")` instead of also checking for a hardcoded `backend_url`. This makes the implementation more robust against different user configurations for Ollama.
- The `OpenAI` client initialization can be simplified. The `openai` library automatically reads the `OPENAI_API_KEY` from the environment if it's not provided, so explicitly checking for it and passing it is redundant.
Suggested change:

```python
self.provider = config.get("llm_provider", "openai").lower()

# Determine embedding model based on provider
if self.provider == "ollama":
    self.embedding = "nomic-embed-text"
    self.use_ollama_embeddings = True
else:
    self.embedding = "text-embedding-3-small"
    self.use_ollama_embeddings = False

# Only create OpenAI client if we're using OpenAI embeddings
if not self.use_ollama_embeddings:
    # The OpenAI client automatically uses the OPENAI_API_KEY env var if set.
    self.client = OpenAI(base_url=config.get("backend_url"))
else:
    self.client = None
```
Pull Request Overview
This PR adds comprehensive multi-provider LLM support to TradingAgents, enabling the framework to work with 9+ AI providers beyond OpenAI. The changes introduce a factory pattern for provider-agnostic LLM initialization while maintaining 100% backward compatibility.
Key changes:
- New LLM factory module providing unified interface for OpenAI, Ollama, Anthropic, Google, Groq, Azure, Together AI, HuggingFace, and OpenRouter
- Refactored initialization logic in trading graph to use factory pattern
- Updated memory module to support provider-specific embedding strategies
- Enhanced CLI with provider-specific model recommendations
Reviewed Changes
Copilot reviewed 29 out of 30 changed files in this pull request and generated 16 comments.
| File | Description |
|---|---|
| tradingagents/llm_factory.py | New factory pattern implementation for creating provider-agnostic LLM instances |
| tradingagents/graph/trading_graph.py | Replaced hardcoded provider logic with factory-based initialization |
| tradingagents/agents/utils/memory.py | Made embedding logic provider-agnostic with fallback to ChromaDB defaults |
| tradingagents/default_config.py | Added configuration options for multiple providers with examples |
| cli/utils.py | Updated model selection menus with Ollama-specific recommendations |
| requirements.txt | Reorganized dependencies by provider with clear documentation |
| Multiple test/example files | Added comprehensive testing and example scripts for multi-provider usage |
| Multiple documentation files | Created extensive guides for setup, migration, and provider-specific configuration |
Comments suppressed due to low confidence (1)
README.md:1
- The model 'gpt-4.1-nano' does not exist in OpenAI's model lineup. It should be replaced with a valid model identifier such as 'gpt-4o-mini' or another documented OpenAI model.
<p align="center">
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
```python
if base_url:
    params["base_url"] = base_url
```
Copilot AI (Oct 31, 2025)
Missing API key parameter handling. The factory should accept and pass through the `api_key` parameter to allow explicit key configuration instead of relying solely on environment variables. Consider adding `api_key: Optional[str] = None` to the method signature and including it in `params` if provided.
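One possible shape for that change. The factory's actual signature is not shown in the quoted diff, so the standalone helper below (`build_llm_params`) is hypothetical; it only illustrates how an optional `api_key` could be threaded into `params` alongside `base_url`.

```python
from typing import Any, Dict, Optional

def build_llm_params(
    model: str,
    base_url: Optional[str] = None,
    api_key: Optional[str] = None,  # new: allow explicit key configuration
    **kwargs: Any,
) -> Dict[str, Any]:
    """Collect keyword arguments to pass to the provider's chat-model class."""
    params: Dict[str, Any] = {"model": model, **kwargs}
    if base_url:
        params["base_url"] = base_url
    if api_key:
        # An explicit key takes precedence; otherwise the provider library
        # falls back to its usual environment variable.
        params["api_key"] = api_key
    return params
```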
```python
elif config.get("backend_url") == "http://localhost:11434/v1":
    self.embedding = "nomic-embed-text"
    self.use_ollama_embeddings = True
```
Copilot AI (Oct 31, 2025)
Hardcoded URL check for Ollama detection is fragile. This will fail if the Ollama URL is configured differently (e.g., 'http://localhost:11434' without '/v1' as shown in line 258 of cli/utils.py). The logic should rely solely on the provider setting from line 9, not URL string matching.
Suggested change (remove the hardcoded URL check):

```diff
-elif config.get("backend_url") == "http://localhost:11434/v1":
-    self.embedding = "nomic-embed-text"
-    self.use_ollama_embeddings = True
```
| ("Google", "https://generativelanguage.googleapis.com/v1"), | ||
| ("Openrouter", "https://openrouter.ai/api/v1"), | ||
| ("Ollama", "http://localhost:11434/v1"), | ||
| ("Ollama", "http://localhost:11434"), |
Copilot AI (Oct 31, 2025)
Inconsistent Ollama URL format. The URL 'http://localhost:11434' is missing the '/v1' suffix that memory.py line 15 uses for Ollama detection. This inconsistency will prevent the memory module from recognizing Ollama when it is selected through the CLI, breaking provider-agnostic functionality.
| ("Ollama", "http://localhost:11434"), | |
| ("Ollama", "http://localhost:11434/v1"), |
```diff
 )

-self.propagator = Propagator()
+self.propagator = Propagator(max_recur_limit=200)
```
Copilot AI (Oct 31, 2025)
The change from the default `max_recur_limit` to a hardcoded value of 200 appears unrelated to multi-provider support. This should either be extracted to a configuration parameter or explained in the PR description, as it changes system behavior.
Suggested change:

```diff
-self.propagator = Propagator(max_recur_limit=200)
+self.propagator = Propagator(
+    max_recur_limit=self.config.get("max_recur_limit", 200)
+)
```
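If the configurable route is taken, the matching default would most naturally live in `tradingagents/default_config.py`. The actual contents of that file are not part of the quoted diff, so the snippet below is only an illustration; the key name mirrors the suggestion above.

```python
# Hypothetical addition to tradingagents/default_config.py
DEFAULT_CONFIG = {
    # ... existing provider and model settings ...
    "max_recur_limit": 200,  # recursion limit handed to Propagator in trading_graph.py
}
```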
| print("Testing Unsupported Provider Handling...\n") | ||
|
|
||
| try: | ||
| llm = LLMFactory.create_llm( |
Copilot AI (Oct 31, 2025)
Variable llm is not used.
Suggested change:

```diff
-    llm = LLMFactory.create_llm(
+    LLMFactory.create_llm(
```
```python
graph = TradingAgentsGraph(config=DEFAULT_CONFIG)

# Option 2: Use Ollama (local)
ollama_config = {**DEFAULT_CONFIG, **OLLAMA_CONFIG}
graph = TradingAgentsGraph(config=ollama_config)

# Option 3: Use Anthropic Claude
anthropic_config = {**DEFAULT_CONFIG, **ANTHROPIC_CONFIG}
graph = TradingAgentsGraph(config=anthropic_config)

# Option 4: Use Google Gemini
google_config = {**DEFAULT_CONFIG, **GOOGLE_CONFIG}
graph = TradingAgentsGraph(config=google_config)
```
Copilot AI (Oct 31, 2025)
This assignment to 'graph' is unnecessary as it is redefined before this value is used.
Suggested change:

```diff
-graph = TradingAgentsGraph(config=DEFAULT_CONFIG)
+# graph = TradingAgentsGraph(config=DEFAULT_CONFIG)

 # Option 2: Use Ollama (local)
 ollama_config = {**DEFAULT_CONFIG, **OLLAMA_CONFIG}
-graph = TradingAgentsGraph(config=ollama_config)
+# graph = TradingAgentsGraph(config=ollama_config)

 # Option 3: Use Anthropic Claude
 anthropic_config = {**DEFAULT_CONFIG, **ANTHROPIC_CONFIG}
-graph = TradingAgentsGraph(config=anthropic_config)
+# graph = TradingAgentsGraph(config=anthropic_config)

 # Option 4: Use Google Gemini
 google_config = {**DEFAULT_CONFIG, **GOOGLE_CONFIG}
-graph = TradingAgentsGraph(config=google_config)
+# graph = TradingAgentsGraph(config=google_config)
```
Copilot AI (Oct 31, 2025)

Copilot raised the same issue, with an identical quoted snippet and suggested fix, on two other example files: each reassigns `graph = TradingAgentsGraph(...)` for every provider option even though the value is redefined before it is used, and the suggestion in both cases is likewise to comment out the unused assignments.
```python
# Test 1: Import the factory
print("Test 1: Importing LLM Factory...")
try:
    from tradingagents.llm_factory import LLMFactory, get_llm_instance
```
Copilot AI (Oct 31, 2025)
Import of 'LLMFactory' is not used.
Suggested change:

```diff
-    from tradingagents.llm_factory import LLMFactory, get_llm_instance
+    from tradingagents.llm_factory import get_llm_instance
```
```python
# Test 4: Check if langchain-community is installed
print("Test 4: Checking for langchain-community package...")
try:
    from langchain_community.chat_models import ChatOllama
```
Copilot AI (Oct 31, 2025)
Import of 'ChatOllama' is not used.
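One way to keep this dependency check without the unused import is to probe for the package rather than importing a specific class. A minimal sketch, assuming the test only needs to confirm that `langchain-community` is installed:

```python
import importlib.util

# Test 4 (alternative): detect langchain-community without importing ChatOllama
if importlib.util.find_spec("langchain_community") is not None:
    print("langchain-community is installed")
else:
    print("langchain-community is missing; install it with: pip install langchain-community")
```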
Added support for multiple LLMs, especially for running locally with Ollama.