FP16 inference: forward pass produces NaN
Notice: In order to resolve issues more efficiently, please raise issues following the template.
🐛 Bug
Running inference with the official example code, the forward pass produces NaN when FP16 is used.
To Reproduce
Steps to reproduce the behavior (always include the command you ran):
Inference code:

```python
from funasr import AutoModel
import torch

with torch.cuda.amp.autocast():
    model = AutoModel(
        model="paraformer-zh",
        # spk_model="cam++",
    )
    res = model.generate(input="test.wav", batch_size_s=300, hotword='魔搭')
    print(res)
```
During forward inference, NaN appears in the forward of sanm/encoder.py; with FP32 there is no problem.
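For anyone trying to narrow this down, below is a minimal debugging sketch (not part of the original report) that registers standard PyTorch forward hooks on every submodule to report where NaNs first appear under autocast. Accessing the underlying `torch.nn.Module` via `model.model` is an assumption about FunASR's `AutoModel` wrapper; adjust to however the module is exposed in your version.

```python
import torch

def attach_nan_hooks(module: torch.nn.Module):
    """Register forward hooks that print which submodule first emits NaN."""
    handles = []

    def make_hook(name):
        def hook(mod, inputs, output):
            outs = output if isinstance(output, (tuple, list)) else (output,)
            for t in outs:
                if torch.is_tensor(t) and t.is_floating_point() and torch.isnan(t).any():
                    print(f"NaN in output of {name} ({type(mod).__name__}), dtype={t.dtype}")
        return hook

    for name, sub in module.named_modules():
        handles.append(sub.register_forward_hook(make_hook(name)))
    return handles  # call h.remove() on each handle when finished

# Hypothetical usage (`model.model` assumed to be the underlying nn.Module):
# handles = attach_nan_hooks(model.model)
# with torch.cuda.amp.autocast():
#     model.generate(input="test.wav", batch_size_s=300)
```

If the hooks point at a softmax or normalization overflowing the FP16 range, one common mitigation (an assumption here, not a confirmed fix for this issue) is `torch.autocast(device_type="cuda", dtype=torch.bfloat16)`, since bfloat16 keeps FP32's exponent range.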
Environment
- OS (e.g., Linux):
- FunASR Version (e.g., 1.0.0):
- ModelScope Version (e.g., 1.11.0):
- PyTorch Version (e.g., 2.0.0):
- How you installed funasr (pip, source):
- Python version:
- GPU (e.g., V100M32):
- CUDA/cuDNN version (e.g., cuda11.7):
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1)
- Any other relevant information:
Additional context
Has this issue been resolved?
How is the FP16 model trained? Can I save the FP16 model as a normal model after training?
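On saving an FP16-trained model back as a normal (FP32) model: the sketch below uses only generic PyTorch, not a FunASR-specific API, and the checkpoint paths are placeholders. Casting parameters back with `.float()` restores FP32 storage, though it cannot recover precision already lost during FP16 training.

```python
import torch

# Assumes the file holds a flat state dict; cast floating-point tensors
# back to FP32 before saving. Integer buffers are left untouched.
state = torch.load("checkpoint_fp16.pt", map_location="cpu")
fp32_state = {
    k: v.float() if torch.is_tensor(v) and v.is_floating_point() else v
    for k, v in state.items()
}
torch.save(fp32_state, "checkpoint_fp32.pt")
```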
I ran into this problem too. Has it been resolved?
We are still updating the model.
Has this been resolved? It feels like it has been a long time.