dynamic_shapes error when exporting Parakeet models to ONNX
I tried to export the Parakeet NeMo model to ONNX files. My code is essentially:

```python
self.model.export(str(model_onnx_path))
```
It was working a while back, but with the latest NeMo I ran into the following error:
[NeMo W 2025-11-07 05:06:37 nemo_logging:405] /usr/local/lib/python3.12/dist-packages/nemo/core/classes/exportable.py:264: UserWarning: # 'dynamic_axes' is not recommended when dynamo=True, and may lead to 'torch._dynamo.exc.UserError: Constraints violated.' Supply the 'dynamic_shapes' argument instead if export is unsuccessful.
After some debugging, I believe there is a bug in the latest code: the dict named `dynamic_axes` actually holds `dynamic_shapes`, as can be seen here:
https://github.com/NVIDIA-NeMo/NeMo/blob/b6547c3c2f4eaf163764dc2726307a3aee89ee15/nemo/core/classes/exportable.py#L226
Thus, the line
https://github.com/NVIDIA-NeMo/NeMo/blob/b6547c3c2f4eaf163764dc2726307a3aee89ee15/nemo/core/classes/exportable.py#L272
should look like:

```python
if dynamic_axes is None:
    dynamic_axes = self.dynamic_shapes_for_export(use_dynamo=True)
...
torch.onnx.export(
    dynamic_shapes=dynamic_axes,
)
```
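To illustrate the mismatch between the two argument formats: `as_dynamic_shapes` below is a hypothetical helper (not part of NeMo or PyTorch) sketching how an old-style `dynamic_axes` mapping can, in simple cases, be forwarded under the `dynamic_shapes` keyword. This assumes a PyTorch version whose `torch.onnx.export` accepts plain string dimension names in `dynamic_shapes` (recent releases do).

```python
# Old-style `dynamic_axes` values may be either a dict {axis_index: axis_name},
# e.g. {"audio_signal": {0: "batch", 2: "time"}}, or a plain list of axis
# indices, e.g. {"length": [0]}. The dynamo-based exporter wants the same
# per-input mapping passed as `dynamic_shapes`, so normalizing the list form
# to named axes is enough for simple cases.

def as_dynamic_shapes(dynamic_axes):
    """Normalize an old-style dynamic_axes mapping for use as dynamic_shapes."""
    shapes = {}
    for input_name, axes in dynamic_axes.items():
        if isinstance(axes, dict):
            # Already {axis_index: axis_name}; pass through unchanged.
            shapes[input_name] = dict(axes)
        else:
            # List/tuple of axis indices: synthesize a name per dynamic axis.
            shapes[input_name] = {i: f"{input_name}_dim_{i}" for i in axes}
    return shapes

print(as_dynamic_shapes({"audio_signal": {0: "batch", 2: "time"}, "length": [0]}))
```

The normalized dict would then be handed to `torch.onnx.export(..., dynamic_shapes=...)` in place of the old `dynamic_axes=...` keyword.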
After making this change, I was able to export the ONNX files as expected.
I also ran into the same error during export. The issue seems to be that PyTorch v2.9.0 integrated the dynamo export into the main export: `torch.onnx.dynamo_export` has been removed and `torch.onnx.export` now defaults to `dynamo=True`. The `Exportable` class in NeMo needs some changes to make use of the new torch export. For now, sticking with PyTorch v2.8.0 worked for me.
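As a stopgap (this is the workaround described above, not an official NeMo recommendation), the pin can be applied with pip; the exact companion-package versions you need depend on your NeMo install:

```shell
# Pin PyTorch to the last 2.8.x release, whose torch.onnx.export still
# honors the legacy `dynamic_axes` path used by NeMo's Exportable class.
pip install "torch==2.8.0"
```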
I'm facing this issue as well.
Hi @TassiloHo
Thanks a lot — I was able to complete the conversion after downgrading PyTorch to 2.8.0.
I now see the following output:
[NeMo I 2025-12-05 11:35:10 exportable:135] Successfully exported ConformerEncoder to encoder-parakeet-tdt-0_6b.onnx
[NeMo I 2025-12-05 11:35:10 exportable:135] Successfully exported RNNTDecoderJoint to decoder_joint-parakeet-tdt-0_6b.onnx
Could you please help me with a simple Python script that performs transcription using only ONNX (no Torch, Transformers, or NeMo)?
Thanks in advance :)
Hey @AbhijithMallya
Inference using just ONNX requires an exported preprocessor, which needs a workaround since the complex STFT in the featurizer cannot be exported to ONNX. For an example export (with preprocessor) of an RNNT model, plus inference using only ONNX, you can check this: https://github.com/TassiloHo/OnnxNemoSTTInference