
Qwen3.5 models support #731

@eavelardev

Description

docker model run hf.co/Qwen/Qwen3.5-0.8B

Error

> background model preload failed: preload failed: status=500 body=unable to load runner: error waiting for runner to be ready: vLLM terminated unexpectedly: vLLM failed: (APIServer pid=72) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig

(APIServer pid=72)   Value error, The checkpoint you are trying to load has model type `qwen3_5` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git` [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]

(APIServer pid=72)     For further information visit https://errors.pydantic.dev/2.12/v/value_error

I am running the latest docker/model-runner:latest-vllm-cuda image, so the Transformers version bundled with it does not yet recognize the `qwen3_5` architecture.
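For anyone wanting to confirm this locally: the error is raised because the Transformers release inside the image has no registry entry for the checkpoint's `model_type`. A minimal sketch of that check (the `supports_model_type` helper is hypothetical, not part of the runner; it only inspects Transformers' public auto-config mapping):

```python
def supports_model_type(model_type: str) -> bool:
    """Return True if the installed Transformers release knows this architecture.

    Hedged sketch: CONFIG_MAPPING is the registry AutoConfig consults when it
    resolves a checkpoint's `model_type`; an unknown key produces the
    "does not recognize this architecture" ValueError seen in the log above.
    """
    try:
        from transformers.models.auto.configuration_auto import CONFIG_MAPPING
    except ImportError:
        # Transformers is not installed in this environment.
        return False
    return model_type in CONFIG_MAPPING


if __name__ == "__main__":
    # Inside the failing image this would print False for "qwen3_5" until the
    # image ships a Transformers build that includes the new architecture.
    print("qwen3_5 supported:", supports_model_type("qwen3_5"))
```

Running this inside the container (e.g. via a `python -c` override of the entrypoint) would show whether a rebuilt image with a newer Transformers resolves the issue.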
