Feature hasn't been suggested before.
Describe the enhancement you want to request
When OpenCode is connected to LM Studio, it fetches a hardcoded list of models from models.dev instead of querying the local LM Studio server's /v1/models endpoint.
Other tools, such as the "Continue" VS Code extension, handle this with an AUTODETECT model value (or similar logic) that dynamically queries the provider for its available models.
As a result, users have to manually update their config every time they switch to or download a new model in LM Studio.
Steps to Reproduce:
- Install and start LM Studio with Local Server enabled (default port 1234).
- Load any model in LM Studio.
- Connect OpenCode to the lmstudio provider via /connect.
- Run /models inside OpenCode.
Current Behavior:
OpenCode shows a hardcoded list from models.dev (e.g., gpt-oss, qwen3) instead of the models actually loaded in LM Studio.
Expected Behavior:
OpenCode should query http://127.0.0.1:1234/v1/models and display the available local models dynamically, similar to how it works for Ollama or when using "AUTODETECT" logic.
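As a rough sketch of the expected behavior, the lookup could hit LM Studio's OpenAI-compatible /v1/models endpoint and list the returned model IDs. The base URL constant and the helper names below are illustrative assumptions, not OpenCode's actual provider code:

```python
"""Sketch: dynamic model discovery via LM Studio's OpenAI-compatible
/v1/models endpoint (names and structure are assumptions for illustration)."""
import json
from urllib.request import urlopen

LMSTUDIO_BASE_URL = "http://127.0.0.1:1234"  # LM Studio's default local server address


def parse_model_ids(payload: dict) -> list[str]:
    # OpenAI-compatible servers return {"object": "list", "data": [{"id": ...}, ...]}
    return [model["id"] for model in payload.get("data", [])]


def fetch_models(base_url: str = LMSTUDIO_BASE_URL) -> list[str]:
    # Hypothetical helper: query the running LM Studio server for its models
    with urlopen(f"{base_url}/v1/models") as resp:
        return parse_model_ids(json.load(resp))


# Example response shape for a server with one model loaded:
sample = {"object": "list", "data": [{"id": "qwen2.5-7b-instruct", "object": "model"}]}
print(parse_model_ids(sample))  # ['qwen2.5-7b-instruct']
```

With this approach, /models inside OpenCode would reflect whatever is currently loaded in LM Studio, with no config changes needed when models are swapped.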