
Can't export a model #18121

@gggekov

Description


🐛 Describe the bug

The mms-tts-eng model from Hugging Face (https://huggingface.co/facebook/mms-tts-eng) fails to export with torch.export. A script reproducing the error:

from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

text = "some example text in the English language"
inputs = tokenizer(text, return_tensors="pt")

print(inputs, inputs['input_ids'].shape, inputs['input_ids'].dtype,
      inputs['attention_mask'].shape, inputs['attention_mask'].dtype)

example_inputs = (inputs['input_ids'], inputs['attention_mask'])
exported_program = torch.export.export(model, example_inputs)

and the error is

item: "Sym(Eq(u2, 1))" = torch.ops.aten.item.default(ne);  ne = item = None
.....
torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression Eq(u2, 1) (unhinted: Eq(u2, 1)).  (Size-like symbols: none)

consider using data-dependent friendly APIs such as guard_or_false, guard_or_true and statically_known_true.
Caused by: (_export/non_strict_utils.py:1159 in __torch_function__)
...

I'm using ExecuTorch installed from source in editable mode and transformers 5.0.0.rc1.

It would be great if there were a way to export this model: it's an interesting use case, the network is already trained in PyTorch and available on Hugging Face, and I would like to deploy it with ExecuTorch.
Is there a way to export the mms-tts-eng model?

Versions

Environment information:

PyTorch version: 2.11.0.dev20260215
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.7.4 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.6.4.2)
CMake version: version 3.31.6
Libc version: N/A

Python version: 3.10.16 (main, Dec  3 2024, 17:27:57) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.7.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A

CPU:
Apple M3 Pro

Versions of relevant libraries:
[pip3] executorch==1.2.0a0+096f10c
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==24.4.26
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] mypy==1.14.1
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.2.6
[pip3] onnxruntime==1.23.2
[pip3] optimum-executorch==0.2.0.dev0
[pip3] pytorch_tokenizers==1.1.0
[pip3] torch==2.11.0.dev20260215
[pip3] torchao==0.16.0+git026b76d12
[pip3] torchaudio==2.11.0.dev20260215
[pip3] torchcodec==0.10.0.dev20251222
[pip3] torchdata==0.11.0
[pip3] torcheval==0.0.7
[pip3] torchsr==1.0.4
[pip3] torchtune==0.0.0
[pip3] torchvision==0.26.0.dev20260215
[conda] Could not collect
