XUJiahua
check here: https://gitee.com/jscode/go-package-plantuml

```
mkdir -p $GOPATH/src/git.oschina.net/jscode && cd $GOPATH/src/git.oschina.net/jscode
git clone https://gitee.com/jscode/go-package-plantuml.git
```
TODO: split alerts
I'm on macOS. The sherpa-onnx-streaming-zipformer-ctc-multi-zh-hans-2023-12-13.tar.bz2 model works fine. paraformer int8 hits the same problem.

```
./bin/sherpa-onnx \
  --provider=coreml \
  --tokens=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/tokens.txt \
  --paraformer-encoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/encoder.int8.onnx \
  --paraformer-decoder=./sherpa-onnx-streaming-paraformer-bilingual-zh-en/decoder.int8.onnx \
  ./sherpa-onnx-streaming-paraformer-bilingual-zh-en/test_wavs/0.wav

OnlineRecognizerConfig(feat_config=FeatureExtractorConfig(sampling_rate=16000, feature_dim=80, low_freq=20, high_freq=-400, dither=0), model_config=OnlineModelConfig(transducer=OnlineTransducerModelConfig(encoder="", decoder="", joiner=""), paraformer=OnlineParaformerModelConfig(encoder="./sherpa-onnx-streaming-paraformer-bilingual-zh-en/encoder.int8.onnx", decoder="./sherpa-onnx-streaming-paraformer-bilingual-zh-en/decoder.int8.onnx"), wenet_ctc=OnlineWenetCtcModelConfig(model="",...
```
Thanks for the quick reply! It's probably a bug in the onnxruntime CoreML provider. I found another project that uses onnxruntime to run a paraformer model, and it hits the same problem: https://github.com/RapidAI/RapidASR/blob/main/cpp_onnx/readme.md
I'll give it a try. Is there an export script I can reference? It looks like the original PyTorch model needs to be split into an encoder and a decoder and exported separately.
I successfully built it on Windows. However, I don't have an AMD GPU on my Windows machine, so I need your assistance to verify if it actually works. As master...