- gpt-4: Latest GPT-4 model, best for complex reasoning
- gpt-4-turbo: Faster version with good performance
- gpt-4-32k: Extended context window (32,768 tokens)
- gpt-3.5-turbo: Fast and efficient for most tasks
- gpt-3.5-turbo-16k: Extended context window (16,384 tokens)
```bash
# Set GPT-4 as default
gh copilot config set model gpt-4

# Set GPT-3.5 for faster responses
gh copilot config set model gpt-3.5-turbo

# Use extended context version
gh copilot config set model gpt-4-32k
```

You can also set the model through an environment variable:

```bash
export GH_COPILOT_MODEL="gpt-4"
```

Or edit `~/.config/gh/copilot/config.yml`:

```yaml
default_model: gpt-4
```

Use GPT-4 for:

- Complex code architecture decisions
- Detailed code reviews
- Advanced debugging
- Learning new technologies
- Writing documentation

Use GPT-3.5 Turbo for:
- Quick code snippets
- Simple explanations
- Basic debugging
- Faster iteration
- Testing ideas
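The two lists above can be folded into a small helper that maps a task type to a model name. This is an illustrative sketch, not part of the Copilot CLI: `pick_model` and its task keywords are hypothetical names chosen for the example.

```bash
#!/bin/sh
# Hypothetical helper: choose a model based on the kind of task.
# Mirrors the guidance above: heavyweight tasks -> gpt-4,
# quick iteration -> gpt-3.5-turbo.
pick_model() {
    case "$1" in
        architecture|review|deep-debug|learn|docs)
            echo "gpt-4" ;;
        snippet|explain|debug|iterate|test-idea)
            echo "gpt-3.5-turbo" ;;
        *)
            # Cheap default; escalate manually if the answer falls short.
            echo "gpt-3.5-turbo" ;;
    esac
}

# Example: feed the choice into the config command shown earlier.
# gh copilot config set model "$(pick_model review)"
```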
```bash
# Check current model's context window
gh copilot info context

# View token usage for conversation
gh copilot usage --conversation
```

| Model | Context Window | Best For |
|---|---|---|
| GPT-4 | 8,192 tokens | Standard tasks |
| GPT-4-32K | 32,768 tokens | Large codebases |
| GPT-3.5-Turbo | 4,096 tokens | Quick tasks |
| GPT-3.5-Turbo-16K | 16,384 tokens | Medium projects |
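A common rule of thumb is that one token corresponds to roughly four characters of English text or code. Assuming that heuristic, a quick shell check can estimate whether a file will fit in a given context window before you pick a model from the table. The function name `fits_context` is mine, not part of the CLI, and the chars/4 estimate is approximate.

```bash
#!/bin/sh
# Estimate tokens as chars/4 (rough heuristic) and compare against
# a model's context window from the table above.
fits_context() {
    file=$1
    window=$2                     # e.g. 8192 for GPT-4
    chars=$(wc -c < "$file")
    tokens=$((chars / 4))
    [ "$tokens" -le "$window" ]   # exit 0 if the file should fit
}

# Example: does src/main.py fit in GPT-4's 8,192-token window?
# fits_context src/main.py 8192 && echo "fits" || echo "try gpt-4-32k"
```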
```bash
# Clear conversation to free tokens
gh copilot clear

# Start new focused conversation
gh copilot chat --new

# Include specific files in context
gh copilot ask "Explain this function" --include src/main.py
```

```bash
# Use different model for one question
gh copilot ask "Complex algorithm question" --model gpt-4

# Continue with default model
gh copilot ask "Simple follow-up"
```

```bash
# Change default model
gh copilot config set model gpt-3.5-turbo

# All subsequent commands use new model
gh copilot ask "This uses the new model"
```

```bash
# Daily usage
gh copilot usage --today

# Weekly summary
gh copilot usage --week

# Model-specific usage
gh copilot usage --model gpt-4
```

- Start with GPT-3.5 for exploration
- Switch to GPT-4 for complex problems
- Use extended context models sparingly
- Clear context when switching topics
- Monitor usage regularly
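The "start with GPT-3.5, switch to GPT-4" practice can be scripted. The sketch below is a dry run: it only prints the commands it would execute, since the right escalation criteria depend on your workflow. `ask_with_escalation` is a hypothetical wrapper, not a built-in command.

```bash
#!/bin/sh
# Dry-run sketch: ask with the cheap model first, and print the
# escalation command to run if the first answer is not good enough.
ask_with_escalation() {
    prompt=$1
    echo "gh copilot ask \"$prompt\" --model gpt-3.5-turbo"
    echo "# If the answer falls short, re-ask with:"
    echo "gh copilot ask \"$prompt\" --model gpt-4"
}

# Example:
# ask_with_escalation "Refactor this parser"
```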
```bash
# Test response quality
gh copilot ask "Explain recursion" --model gpt-4
gh copilot ask "Explain recursion" --model gpt-3.5-turbo

# Compare context handling
gh copilot ask "Summarize this large file" --include large-file.py --model gpt-4-32k
```

```bash
# Time response speed
time gh copilot ask "Quick Python function" --model gpt-3.5-turbo
time gh copilot ask "Quick Python function" --model gpt-4
```

```bash
# Check the current default model
gh copilot config get model

# Test specific models
gh copilot model test gpt-4
gh copilot model test gpt-3.5-turbo

# Reset to the default model
gh copilot config reset model
```
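If you want the speed comparison above in a reusable form, a small wrapper can report elapsed wall-clock seconds for any command. `elapsed` is an illustrative helper of my own; for one-off checks, `time` as shown above is simpler.

```bash
#!/bin/sh
# Illustrative: measure wall-clock seconds for an arbitrary command.
elapsed() {
    start=$(date +%s)
    "$@" > /dev/null 2>&1
    end=$(date +%s)
    echo $((end - start))
}

# Compare models (uses the gh copilot commands shown earlier):
# elapsed gh copilot ask "Quick Python function" --model gpt-3.5-turbo
# elapsed gh copilot ask "Quick Python function" --model gpt-4
```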