Summary
Clarify external model provider support as three separate surfaces and scope this issue to the Copilot CLI runtime provider selection/configuration surface.
Why this clarification is needed
There is recurring confusion between:
- VS Code extension APIs for language model provider registration,
- Chat participant/agent APIs, and
- Copilot CLI runtime provider selection.
These are related in ecosystem discussions, but they are not the same API surface and should not be conflated in implementation planning.
Surface separation (explicit)
A) VS Code extension surface (provider registration)
- API: registerLanguageModelChatProvider
- Proposal gate: enabledApiProposals: ["chatProvider"]
This is extension-host API surface for registering model/chat providers in VS Code.
B) Chat participant/agent surface (separate concern)
- Contribution point: chatParticipants
- API: createChatParticipant
This is participant/agent orchestration UX surface. It is not the same as provider registration.
C) Copilot CLI surface (this issue)
- Runtime provider selection and configuration for CLI execution.
- Not VS Code extension registration.
- Focus: deterministic resolution of provider/model/auth settings in the CLI (flag > env > config > default).
Copilot CLI proposal (in scope for this issue)
1) Deterministic precedence
CLI flag > environment variables > config file > built-in default
Resolution per setting:
- If explicit CLI flag is set, use it.
- Else if env var is set, use it.
- Else if config key is set, use it.
- Else use default.
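The per-setting resolution above can be sketched as a small helper. This is an illustrative sketch only; `resolve_setting` and its parameters are hypothetical names, not existing Copilot CLI internals.

```python
import os

def resolve_setting(flag_value, env_var, config, config_key, default):
    """Return the first value set, in flag > env > config > default order."""
    if flag_value is not None:           # 1. explicit CLI flag
        return flag_value
    env_value = os.environ.get(env_var)  # 2. environment variable
    if env_value is not None:
        return env_value
    if config_key in config:             # 3. config file key
        return config[config_key]
    return default                       # 4. built-in default

# Env var and config key both set, but no flag: the env var wins.
os.environ["COPILOT_MODEL"] = "gemma3:latest"
model = resolve_setting(None, "COPILOT_MODEL", {"model": "gpt-4.1"}, "model", "gpt-4.1")
```

Keeping the chain in one function makes the precedence auditable and trivially testable per setting.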
2) CLI surface
Flags
- --model-provider <provider-id>
- --model <model-id>
- --provider-endpoint <https://...>
- --provider-api-key-env <ENV_VAR_NAME>
- --provider-profile <profile-name>
Environment variables
- COPILOT_MODEL_PROVIDER
- COPILOT_MODEL
- COPILOT_PROVIDER_ENDPOINT
- COPILOT_PROVIDER_API_KEY
- COPILOT_PROVIDER_PROFILE
Config (~/.config/copilot/config.json)
{
  "model": "gpt-4.1",
  "provider": {
    "default": "github",
    "profiles": {
      "github": { "type": "github" },
      "ollama-local": {
        "type": "ollama",
        "endpoint": "http://127.0.0.1:11434",
        "model": "gemma3:latest"
      },
      "azure-byok": {
        "type": "openai-compatible",
        "endpoint": "https://example.openai.azure.com/openai/v1",
        "apiKeyEnv": "AZURE_OPENAI_API_KEY",
        "model": "gpt-4o"
      }
    }
  }
}
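Given the config shape above, profile selection reduces to a lookup with a fallback to `provider.default`. The sketch below assumes that shape; `select_profile` is a hypothetical helper, not an existing CLI function.

```python
def select_profile(config, requested=None):
    """Pick the active provider profile: explicit request, else provider.default."""
    provider = config.get("provider", {})
    name = requested or provider.get("default")
    profiles = provider.get("profiles", {})
    if name not in profiles:
        # Surface a clear misconfiguration error instead of failing later.
        raise ValueError(f"unknown provider profile: {name!r}")
    return name, profiles[name]

config = {
    "provider": {
        "default": "github",
        "profiles": {
            "github": {"type": "github"},
            "ollama-local": {"type": "ollama", "endpoint": "http://127.0.0.1:11434"},
        },
    }
}
name, profile = select_profile(config)  # falls back to the "github" default
```

Failing fast on an unknown profile name supports the "clear validation/misconfiguration errors" acceptance criterion.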
3) /model compatibility
- Existing /model behavior remains.
- Provider-aware syntax: <provider>:<model>.
- If provider omitted, use active provider.
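One subtlety in the provider-aware syntax: model ids like gemma3:latest already contain a colon. A sketch of one way to disambiguate is to only treat the prefix as a provider when it matches a known provider id (the function and the check are illustrative assumptions, not specified behavior):

```python
def parse_model_arg(arg, active_provider, known_providers):
    """Parse '<provider>:<model>'; keep the active provider when the
    prefix is not a known provider id (e.g. 'gemma3:latest')."""
    if ":" in arg:
        prefix, rest = arg.split(":", 1)
        if prefix in known_providers:
            return prefix, rest
    return active_provider, arg

providers = {"github", "ollama", "azure-byok"}
parse_model_arg("ollama:gemma3:latest", "github", providers)  # -> ("ollama", "gemma3:latest")
parse_model_arg("gpt-4.1", "github", providers)               # -> ("github", "gpt-4.1")
```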
4) Security constraints
- No raw API keys in config.
- Env-based secret references in config (apiKeyEnv).
- Redact secrets in logs/telemetry/errors.
- Validate endpoint schemes (https, with a localhost exception for local providers).
- Keep provider secrets process-local.
Acceptance criteria
- Explicit documentation that VS Code provider registration APIs and chat participant APIs are out of scope for this CLI issue.
- Deterministic precedence (flag > env > config > default).
- Works with GitHub default, OpenAI-compatible BYOK, and local Ollama.
- Clear validation/misconfiguration errors.
- Secrets redacted; no plaintext persistence.
Non-goals
- Implementing VS Code extension registration APIs in Copilot CLI.
- Implementing chat participant/agent registration in Copilot CLI.
- Full parity guarantees across all providers.
- Automatic model downloads/install.
Related issues (github/copilot-cli)
- Support gemini provider type in Bring Your Own Model (BYOM) feature (#2560)