20 changes: 19 additions & 1 deletion .env.example
@@ -1,2 +1,20 @@
# API Keys - Set the ones you need based on your LLM provider choice
OPENAI_API_KEY=openai_api_key_placeholder
# ANTHROPIC_API_KEY=your-anthropic-api-key-here
# GOOGLE_API_KEY=your-google-api-key-here
# GROQ_API_KEY=your-groq-api-key-here
# TOGETHER_API_KEY=your-together-api-key-here
# AZURE_OPENAI_API_KEY=your-azure-key-here
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/

# Data API Keys
ALPHA_VANTAGE_API_KEY=alpha_vantage_api_key_placeholder
REDDIT_CLIENT_ID=your-reddit-client-id
REDDIT_CLIENT_SECRET=your-reddit-client-secret
REDDIT_USER_AGENT=TradingAgents

# Optional: For local Ollama instance
# OLLAMA_BASE_URL=http://localhost:11434

# Optional: Results directory
# TRADINGAGENTS_RESULTS_DIR=./results
334 changes: 334 additions & 0 deletions CHANGES_SUMMARY.md
@@ -0,0 +1,334 @@
# TradingAgents - AI Provider Agnostic Update - Summary

## Overview

Your TradingAgents project has been updated to be **AI-provider agnostic**. You can now use OpenAI, Ollama, Anthropic, Google, Groq, and several other providers instead of being locked into OpenAI.

## Key Changes Made

### 1. Core Infrastructure

#### New LLM Factory (`tradingagents/llm_factory.py`)
- Unified interface for creating LLM instances from any provider
- Automatic handling of provider-specific initialization
- Supports 9+ providers out of the box
- Clear error messages for missing dependencies
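
The bullets above can be sketched as a dispatch table. This is a minimal, hypothetical stand-in (the `_FakeChatModel` class and registry contents are illustrative); the real `llm_factory.py` constructs LangChain chat models (`ChatOpenAI`, `ChatOllama`, `ChatAnthropic`, ...) and imports them lazily so a missing optional package yields a clear error:

```python
# Illustrative sketch of the factory dispatch pattern; classes here are
# stand-ins, not the project's real implementation.

class UnsupportedProviderError(ValueError):
    """Raised when no constructor is registered for a provider name."""

class _FakeChatModel:
    # Stand-in for a LangChain chat model constructor.
    def __init__(self, model: str, temperature: float = 0.7, **kwargs):
        self.model = model
        self.temperature = temperature
        self.kwargs = kwargs

# One entry per supported provider; the real registry maps to LangChain classes.
_REGISTRY = {
    "openai": _FakeChatModel,
    "ollama": _FakeChatModel,
    "anthropic": _FakeChatModel,
}

class LLMFactory:
    @staticmethod
    def create_llm(provider: str, model: str, temperature: float = 0.7, **kwargs):
        try:
            cls = _REGISTRY[provider.lower()]
        except KeyError:
            # Clear error message instead of a bare KeyError deep in the stack.
            raise UnsupportedProviderError(
                f"Unknown provider {provider!r}; expected one of {sorted(_REGISTRY)}"
            ) from None
        return cls(model, temperature=temperature, **kwargs)

llm = LLMFactory.create_llm(provider="openai", model="gpt-4o-mini", temperature=0.2)
```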

#### Updated Configuration (`tradingagents/default_config.py`)
- Added `llm_provider` setting (default: "openai")
- Added `temperature` for model control
- Added `llm_kwargs` for provider-specific parameters
- Includes example configurations for all providers
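
Concretely, the new keys look roughly like this. The key names come from this summary; the values and comments are illustrative, not necessarily the shipped defaults:

```python
# Illustrative fragment of default_config.py (values are examples, not
# the project's actual defaults).
DEFAULT_CONFIG = {
    # ... existing keys (paths, data settings, etc.) ...
    "llm_provider": "openai",          # which backend the LLM factory builds
    "deep_think_llm": "gpt-4o",        # model for heavyweight reasoning steps
    "quick_think_llm": "gpt-4o-mini",  # model for fast, cheap calls
    "temperature": 0.7,                # sampling temperature for model control
    "llm_kwargs": {},                  # provider-specific extras, e.g. {"num_ctx": 8192}
}
```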

### 2. Code Refactoring

#### `tradingagents/graph/trading_graph.py`
- Removed hardcoded provider checks
- Uses LLM factory for initialization
- Cleaner, more maintainable code

#### Type Annotations Updated
- `tradingagents/graph/setup.py`
- `tradingagents/graph/signal_processing.py`
- `tradingagents/graph/reflection.py`
- Now accept any LangChain-compatible LLM (not just ChatOpenAI)
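
The practical effect of the looser annotations can be sketched with a structural type. The helper name below is hypothetical (the real code lives in the three modules above); the point is that any object exposing the LangChain chat-model interface now type-checks, not just `ChatOpenAI`:

```python
from typing import Protocol

class ChatModelLike(Protocol):
    """Structural stand-in for the LangChain chat-model interface;
    invoke() is the method the graph code actually calls."""
    def invoke(self, input, **kwargs): ...

# Hypothetical helper: previously annotated `llm: ChatOpenAI`, now any
# compatible model (ChatOllama, ChatAnthropic, ...) is accepted.
def extract_signal(llm: ChatModelLike, report: str) -> str:
    return llm.invoke(report)

class EchoLLM:
    # Toy model for demonstration only.
    def invoke(self, input, **kwargs):
        return f"signal({input})"

print(extract_signal(EchoLLM(), "BUY"))
```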

### 3. Dependencies

#### `requirements.txt`
- Organized by purpose with comments
- Includes langchain-core and langchain-community
- Optional packages documented for each provider

#### `.env.example`
- Added API key placeholders for all providers
- Documented Ollama setup (no API key needed)

## New Documentation

### Comprehensive Guides

1. **`docs/LLM_PROVIDER_GUIDE.md`** (Main Reference)
- Complete setup for each provider
- Environment variables needed
- Required packages
- Model recommendations by use case
- Troubleshooting section

2. **`docs/MULTI_PROVIDER_SUPPORT.md`** (Quick Start)
- Quick code examples
- Installation notes
- Environment setup

3. **`docs/MIGRATION_GUIDE.md`** (For Existing Users)
- What changed and why
- Migration steps
- Benefits of multi-provider support
- Breaking changes (none!)

4. **`docs/README_ADDITION.md`** (README Enhancement)
- Suggested additions to main README
- Quick examples for each provider

### Example Configurations

5. **`examples/llm_provider_configs.py`**
- Pre-configured settings for all providers
- Ready-to-use code snippets
- Usage examples

## Supported Providers

| Provider | Type | Cost | Setup Difficulty | Best For |
|----------|------|------|------------------|----------|
| **OpenAI** | Cloud API | $$$ | Easy | Quality & Reliability |
| **Ollama** | Local | FREE | Medium | Privacy & Cost Savings |
| **Anthropic** | Cloud API | $$$ | Easy | Quality & Long Context |
| **Google Gemini** | Cloud API | $$ | Easy | Cost-Effective Quality |
| **Groq** | Cloud API | $ | Easy | Speed |
| **OpenRouter** | Cloud API | Varies | Easy | Multi-Provider Access |
| **Azure OpenAI** | Cloud API | $$$ | Medium | Enterprise |
| **Together AI** | Cloud API | $ | Easy | Open Source Models |
| **HuggingFace** | Cloud API | Varies | Easy | Model Variety |
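
Most of these providers get a quick-start below; OpenRouter does not, so here is a hedged configuration sketch. The provider string and model IDs are assumptions, not confirmed values from the codebase — check `docs/LLM_PROVIDER_GUIDE.md` for the authoritative settings:

```python
# Hedged sketch for OpenRouter: provider string and model IDs are
# assumptions; backend_url is OpenRouter's OpenAI-compatible endpoint.
config = {
    "llm_provider": "openrouter",
    "deep_think_llm": "anthropic/claude-3-opus",          # illustrative model ID
    "quick_think_llm": "meta-llama/llama-3-8b-instruct",  # illustrative model ID
    "backend_url": "https://openrouter.ai/api/v1",
}
```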

## Quick Start Guide

### Current Setup (OpenAI) - No Changes Needed

```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

# Your existing code still works!
ta = TradingAgentsGraph(config=DEFAULT_CONFIG)
```

### Switch to Ollama (Free, Local)

**1. Install Ollama:**
```bash
# Visit https://ollama.ai and install
ollama pull llama3:70b
ollama pull llama3:8b
```

**2. Install Package:**
```bash
pip install langchain-community
```

**3. Update Code:**
```python
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "ollama"
config["deep_think_llm"] = "llama3:70b"
config["quick_think_llm"] = "llama3:8b"
config["backend_url"] = "http://localhost:11434"

ta = TradingAgentsGraph(config=config)
_, decision = ta.propagate("NVDA", "2024-05-10")
```

### Switch to Anthropic Claude

**1. Get API Key:**
```bash
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```

**2. Install Package:**
```bash
pip install langchain-anthropic # Already in requirements.txt
```

**3. Update Code:**
```python
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "anthropic"
config["deep_think_llm"] = "claude-3-opus-20240229"
config["quick_think_llm"] = "claude-3-haiku-20240307"

ta = TradingAgentsGraph(config=config)
```

### Switch to Google Gemini

**1. Get API Key:**
```bash
export GOOGLE_API_KEY=your-google-key-here
```

**2. Install Package:**
```bash
pip install langchain-google-genai # Already in requirements.txt
```

**3. Update Code:**
```python
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "google"
config["deep_think_llm"] = "gemini-1.5-pro"
config["quick_think_llm"] = "gemini-1.5-flash"

ta = TradingAgentsGraph(config=config)
```

### Switch to Groq (Fast & Affordable)

**1. Get API Key:**
```bash
export GROQ_API_KEY=gsk-your-groq-key
```

**2. Install Package:**
```bash
pip install langchain-groq
```

**3. Update Code:**
```python
config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "groq"
config["deep_think_llm"] = "mixtral-8x7b-32768"
config["quick_think_llm"] = "llama3-8b-8192"

ta = TradingAgentsGraph(config=config)
```

## Benefits

### 💰 Cost Savings
- **Free:** Run Llama 3 locally with Ollama ($0/month)
- **Cheap:** Use Groq or Google Gemini (typically a small fraction of GPT-4-class API costs)
- **Flexible:** Mix providers based on task complexity

### 🔒 Privacy
- Run models completely locally with Ollama
- No data sent to external APIs
- Full control over your trading data

### ⚡ Performance
- Groq: Ultra-fast inference (500+ tokens/sec)
- Ollama: No network round-trips for local inference
- Choose the best tool for each job

### 🎯 Flexibility
- Not vendor-locked
- Switch providers in seconds
- Test multiple models easily

## Model Recommendations

### Best Quality
- **Deep Think:** GPT-4o or Claude 3 Opus
- **Quick Think:** GPT-4o-mini or Claude 3 Haiku

### Best Cost (Free)
- **Deep Think:** Llama 3 70B (Ollama)
- **Quick Think:** Llama 3 8B (Ollama)

### Best Speed
- **Deep Think:** Mixtral 8x7B (Groq)
- **Quick Think:** Llama 3 8B (Groq)

### Best Balance
- **Deep Think:** Gemini 1.5 Pro or Claude 3 Sonnet
- **Quick Think:** Gemini 1.5 Flash or Claude 3 Haiku

## Files Modified

### Core Files
- ✅ `tradingagents/llm_factory.py` (NEW)
- ✅ `tradingagents/default_config.py`
- ✅ `tradingagents/graph/trading_graph.py`
- ✅ `tradingagents/graph/setup.py`
- ✅ `tradingagents/graph/signal_processing.py`
- ✅ `tradingagents/graph/reflection.py`
- ✅ `requirements.txt`
- ✅ `.env.example`

### Documentation (NEW)
- ✅ `docs/LLM_PROVIDER_GUIDE.md`
- ✅ `docs/MULTI_PROVIDER_SUPPORT.md`
- ✅ `docs/MIGRATION_GUIDE.md`
- ✅ `docs/README_ADDITION.md`

### Examples (NEW)
- ✅ `examples/llm_provider_configs.py`

## Testing

Test the changes with:

```python
# Test with OpenAI (should work as before)
from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

ta = TradingAgentsGraph(config=DEFAULT_CONFIG, debug=True)
_, decision = ta.propagate("AAPL", "2024-05-10")
print(f"Decision: {decision}")

# Test LLM factory directly
from tradingagents.llm_factory import LLMFactory

llm = LLMFactory.create_llm(
    provider="openai",
    model="gpt-4o-mini",
    temperature=0.7,
)
response = llm.invoke("What is 2+2?")
print(response.content)
```

## Next Steps

1. **Review Documentation:**
- Read `docs/LLM_PROVIDER_GUIDE.md` for detailed setup
- Check `examples/llm_provider_configs.py` for ready-to-use configs

2. **Try Ollama (Free):**
- Install Ollama from https://ollama.ai
- Pull a model: `ollama pull llama3`
- Update your config to use Ollama
- Save money while maintaining quality!

3. **Experiment:**
- Test different providers for different tasks
- Compare quality vs. cost vs. speed
- Find the optimal setup for your use case

4. **Update README (Optional):**
- Add the content from `docs/README_ADDITION.md` to your main README
- Let users know about multi-provider support

## Backward Compatibility

✅ **100% Backward Compatible**
- Existing code continues to work
- Default configuration still uses OpenAI
- No breaking changes

## Support

If you encounter issues:

1. Check `docs/LLM_PROVIDER_GUIDE.md` for setup instructions
2. Verify API keys are set correctly
3. Ensure required packages are installed
4. For Ollama, make sure it's running (`ollama serve`)
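
Steps 2–4 above can be scripted. A small sanity-check sketch (POSIX shell assumed; `check_env` is a helper defined here, not part of the project, and the Ollama URL is the default from `.env.example`):

```shell
# Print whether the named environment variable is set (non-empty).
check_env() {
  if [ -n "$(printenv "$1")" ]; then
    echo "$1 is set"
  else
    echo "$1 is missing"
  fi
}

check_env OPENAI_API_KEY
check_env ALPHA_VANTAGE_API_KEY

# The default Ollama endpoint answers on port 11434 when `ollama serve` is running.
curl -s -o /dev/null http://localhost:11434 && echo "Ollama reachable" || echo "Ollama not running"
```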

## Conclusion

Your TradingAgents project is now **provider-agnostic**. You have the flexibility to:

- Use free local models (Ollama)
- Choose the best provider for each task
- Optimize for cost, speed, or quality
- Maintain privacy with local models
- Future-proof against vendor changes

All while maintaining **100% backward compatibility** with existing code. 🎉