Improve error messages #2145

@mausch

Description

Repro steps:

  1. Go to https://huggingface.co/HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
  2. Click "Use this model" -> Colab
  3. Run code on colab

Generated code is:

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive",
	filename="Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf",
)

which fails with:

ValueError                                Traceback (most recent call last)

/tmp/ipykernel_1917/696730726.py in <cell line: 0>()
      3 from llama_cpp import Llama
      4 
----> 5 llm = Llama.from_pretrained(
      6         repo_id="HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive",
      7         filename="Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf",

2 frames

/usr/local/lib/python3.12/dist-packages/llama_cpp/_internals.py in __init__(self, path_model, params, verbose)
     56 
     57         if model is None:
---> 58             raise ValueError(f"Failed to load model from file: {path_model}")
     59 
     60         vocab = llama_cpp.llama_model_get_vocab(model)

ValueError: Failed to load model from file: /root/.cache/huggingface/hub/models--HauhauCS--Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive/snapshots/53367faad177ee6a23601983cdac4308b51393df/./Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf

This error message is not actionable: it doesn't say why the load failed, so the user can't do anything about it.
