
[FEATURE]:LM Studio provider should auto-detect models via /v1/models API (similar to Continue extension) #23327

@lobanov-coder

Description


  • I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

When connecting OpenCode to LM Studio, it currently fetches a hardcoded list of models from models.dev instead of querying the local LM Studio server's /v1/models endpoint.
Other tools, such as the "Continue" VS Code extension, handle this by using an AUTODETECT model value (or similar logic) to dynamically query the available models on the provider side.
As a result, users have to manually update their config every time they switch to or download a new model in LM Studio.
Steps to Reproduce:

  1. Install and start LM Studio with Local Server enabled (default port 1234).
  2. Load any model in LM Studio.
  3. Connect OpenCode to the lmstudio provider via /connect.
  4. Run /models inside OpenCode.
Current Behavior:
OpenCode shows a hardcoded list from models.dev (e.g., gpt-oss, qwen3) instead of the models actually loaded in LM Studio.

Expected Behavior:
OpenCode should query http://127.0.0.1:1234/v1/models and display the available local models dynamically, similar to how it works for Ollama or when using "AUTODETECT" logic.
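To illustrate, here is a minimal sketch of the requested detection logic. LM Studio's local server exposes an OpenAI-compatible /v1/models endpoint that returns `{ "data": [{ "id": "..." }, ...] }`; the function and type names below are hypothetical, not OpenCode's actual API:

```typescript
// Shape of the OpenAI-compatible /v1/models response that
// LM Studio's local server returns.
interface ModelsResponse {
  data: { id: string }[];
}

// Hypothetical helper: extract the model IDs from a /v1/models
// response body so they can populate the /models picker.
function listLocalModels(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// In OpenCode, the body would come from the live endpoint, e.g.:
//   const res = await fetch("http://127.0.0.1:1234/v1/models");
//   const ids = listLocalModels(await res.json());
// Sample response for illustration (model names are examples only):
const sample: ModelsResponse = {
  data: [{ id: "qwen2.5-7b-instruct" }, { id: "llama-3.2-3b" }],
};

console.log(listLocalModels(sample)); // logs the two loaded model IDs
```

Because the endpoint reflects whatever is currently loaded in LM Studio, re-querying it on each /models invocation would keep the list current without any config changes.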

Metadata


    Labels

    discussion — Used for feature requests, proposals, ideas, etc.
