Repro steps:
- Go to https://huggingface.co/HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
- Click "use this model" -> colab
- Run code on colab
Generated code is:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive",
    filename="Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf",
)
```
which fails with:

```
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_1917/696730726.py in <cell line: 0>()
      3 from llama_cpp import Llama
      4
----> 5 llm = Llama.from_pretrained(
      6     repo_id="HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive",
      7     filename="Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf",

2 frames
/usr/local/lib/python3.12/dist-packages/llama_cpp/_internals.py in __init__(self, path_model, params, verbose)
     56
     57         if model is None:
---> 58             raise ValueError(f"Failed to load model from file: {path_model}")
     59
     60         vocab = llama_cpp.llama_model_get_vocab(model)

ValueError: Failed to load model from file: /root/.cache/huggingface/hub/models--HauhauCS--Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive/snapshots/53367faad177ee6a23601983cdac4308b51393df/./Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-BF16.gguf
```
This error message is not actionable: it reports that the load failed but not why (bad file, truncated download, unsupported format, out of memory, etc.), so the user has no way to diagnose or fix the problem.