
demo_onnx.py throws an error when run

Open · Sekri0 opened this issue 1 year ago · 5 comments

Notice: In order to resolve issues more efficiently, please raise the issue following the template.

🐛 Bug

Running demo_onnx.py raises an error.

demo_onnx.py: (screenshot of the script attached in the original issue; not reproduced here)
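Since the screenshot is not recoverable here, a minimal sketch of the typical funasr_onnx invocation is given below for context. It assumes the stock demo from the SenseVoice README; the model_dir and the example wav path are assumptions, while batch_size=1 and quantize=True are taken from the traceback that follows.

```python
# Minimal sketch of the failing call (assumed to follow the stock SenseVoice
# onnx demo; model_dir and the example wav path are assumptions).
from pathlib import Path

from funasr_onnx import SenseVoiceSmall
from funasr_onnx.utils.postprocess_utils import rich_transcription_postprocess

model_dir = "iic/SenseVoiceSmall"

# Constructing the model with quantize=True triggers the ONNX export
# that raises the TypeError shown in the traceback below.
model = SenseVoiceSmall(model_dir, batch_size=1, quantize=True)

wav_path = "{}/.cache/modelscope/hub/{}/example/en.mp3".format(Path.home(), model_dir)
res = model([wav_path], language="auto", use_itn=True)
print([rich_transcription_postprocess(i) for i in res])
```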

Error message:

```
Traceback (most recent call last):
  File "/workspace/data/wcy/code/SenseVoice/demo_onnx.py", line 13, in <module>
    model = SenseVoiceSmall(model_dir, batch_size=1, quantize=True)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/funasr_onnx/sensevoice_bin.py", line 71, in __init__
    model_dir = model.export(type="onnx", quantize=quantize, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/funasr/auto/auto_model.py", line 664, in export
    export_dir = export_utils.export(model=model, data_in=data_list, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/funasr/utils/export_utils.py", line 24, in export
    _onnx(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/funasr/utils/export_utils.py", line 80, in _onnx
    torch.onnx.export(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/onnx/utils.py", line 516, in export
    _export(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/onnx/utils.py", line 1612, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/onnx/utils.py", line 1134, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/onnx/utils.py", line 1010, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/onnx/utils.py", line 914, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/jit/_trace.py", line 1310, in _get_trace_graph
    outs = ONNXTracedModule(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/jit/_trace.py", line 138, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/jit/_trace.py", line 129, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniforge3/envs/FunAudioLLM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _slow_forward
    result = self.forward(*input, **kwargs)
TypeError: export_forward() missing 2 required positional arguments: 'language' and 'textnorm'
```

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

  1. Run cmd '....'
  2. See error

Code sample

Expected behavior

Environment

  • OS (e.g., Linux): Linux
  • FunASR Version (e.g., 1.0.0): 1.2.0
  • ModelScope Version (e.g., 1.11.0): 1.21.0
  • PyTorch Version (e.g., 2.0.0): 2.3.0
  • How you installed funasr (pip, source): pip install -r requirements + pip3 install -U funasr funasr-onnx
  • Python version: 3.10.16
  • GPU (e.g., V100M32): A100-PCIE-40GB
  • CUDA/cuDNN version (e.g., cuda11.7): 12.2
  • Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1)
  • Any other relevant information:

Additional context

Sekri0 · Dec 25 '24 02:12

I installed from the latest master and tested it without any problem (yesterday a plain pip install funasr_onnx from the official index also worked). Could you upgrade the FunASR version on your side and test whether that resolves it?

majic31 · Dec 25 '24 02:12

I have added detailed environment information. The FunASR 1.2.0 I currently have installed already seems to be the latest, and the error still occurs.

Sekri0 · Dec 25 '24 02:12

onnx                              1.17.0
torch                             2.4.0
funasr                            1.2.1

With these versions, demo_onnx.py runs fine and the model can be exported.

slin000111 · Dec 25 '24 02:12

> I have added detailed environment information. The FunASR 1.2.0 I currently have installed already seems to be the latest, and the error still occurs.

What you are testing is funasr onnx, so you need to update the funasr_onnx version. The latest funasr_onnx on my side is 0.4.1. The latest version can also be installed from source; try following this: https://github.com/modelscope/FunASR/blob/main/runtime/python/onnxruntime/README.md (the "or install from source code" part).
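If it is unclear which versions are actually active in the environment (for example after mixing pip and source installs), the installed package metadata can be queried with the standard library. A small sketch, assuming the distribution names used by pip are funasr and funasr-onnx:

```python
# Print the installed versions of the two packages involved; the advice above
# only takes effect if the funasr-onnx distribution is recent enough (0.4.x).
from importlib.metadata import PackageNotFoundError, version

for pkg in ("funasr", "funasr-onnx"):  # distribution names as used by pip (assumed)
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```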

majic31 · Dec 25 '24 02:12

After installing the latest funasr 1.2.1 from source, the error went away. Thank you very much.

Sekri0 · Dec 25 '24 06:12