doheeeeeee
## 🐛 Bug

Hi, when I tried to load a HuBERT model, I got this error:

```bash
Traceback (most recent call last):
  File "", line 1, in
  File "/home/work/workspace/fairseq_origin/fairseq/checkpoint_utils.py", line...
```
Hello, I have a question about the batch norm statistics loss. Consider parallel training: I have 8 GPUs, and each GPU can fit a batch size of 128. But as you know, batch norm...
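For context on why this matters in parallel training: with plain BatchNorm each GPU normalizes with the mean and variance of only its own 128-sample shard, while synchronized BatchNorm combines the per-shard statistics into global ones (in PyTorch this is done with an all-reduce inside `torch.nn.SyncBatchNorm`). A minimal pure-Python sketch of that combination, with a hypothetical helper name, assuming equal-sized shards and biased variances:

```python
def combine_shard_stats(means, variances):
    """Combine per-GPU means and biased variances (equal-sized shards)
    into the global mean and biased variance.

    Uses Var[X] = E[X^2] - E[X]^2: the global second moment is the
    average of the per-shard (var_i + mean_i^2) terms.
    """
    k = len(means)
    global_mean = sum(means) / k
    second_moment = sum(v + m * m for m, v in zip(means, variances)) / k
    global_var = second_moment - global_mean ** 2
    return global_mean, global_var

# Example: shard A holds [0, 2] -> mean 1, var 1;
#          shard B holds [4, 6] -> mean 5, var 1.
# Pooled data [0, 2, 4, 6] -> mean 3, var 5.
print(combine_shard_stats([1.0, 5.0], [1.0, 1.0]))  # -> (3.0, 5.0)
```

Note that averaging only the per-shard variances would give 1.0 here, badly underestimating the true pooled variance of 5.0, which is exactly the discrepancy between per-GPU and synchronized batch norm.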
I trained VALL-E on LibriTTS for about 100 epochs (which took almost 4 days on 8 A100 GPUs) and obtained plausible synthesized audio. Here is a demo. [1] prompt: [prompt_link](https://drive.google.com/file/d/149pHqb6TZzVwhF1vRN50H8A4AEYShpfp/view?usp=share_link)...
Description: I am experiencing a discrepancy in training loss when using different GPU configurations for training the Zipformer model. Specifically, I observe different training loss patterns when training on a...
Hello. Thank you for this great project. I trained a Zipformer-CTC streaming model using the icefall toolkit and exported the ONNX model using icefall's export code. I implemented Zipformer-CTC streaming in a multi-threaded version. I have...
Hi. Thanks for this nice project. I'm trying to deploy a Zipformer transducer using sherpa and libtorch in a CPU environment. In my implementation, the TorchScript model is shared across all threads, and the decoder...
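The deployment pattern described above can be sketched in dependency-free Python (with hypothetical class and function names, since the actual sherpa/libtorch code is not shown): one read-only model object is shared by every worker thread, while all mutable decoding state lives in per-thread objects, so the model itself needs no lock.

```python
import threading

class SharedModel:
    """Stands in for a read-only TorchScript module: safe to call from
    many threads as long as forward() mutates no shared state."""
    def forward(self, x):
        return x * 2  # placeholder for the real acoustic computation

class DecoderState:
    """Per-thread mutable state (e.g. hypotheses, caches)."""
    def __init__(self):
        self.outputs = []

def worker(model, results, idx):
    state = DecoderState()          # each thread owns its own state
    for frame in range(3):          # pretend to stream 3 frames
        state.outputs.append(model.forward(frame))
    results[idx] = state.outputs

model = SharedModel()               # one instance, shared by all threads
results = {}
threads = [threading.Thread(target=worker, args=(model, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                      # each of the 4 threads decoded independently
```

The key design choice is that only `DecoderState` is ever written to, and each thread constructs its own; sharing mutable decoder state across threads is what typically causes corrupted hypotheses in this kind of deployment.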
Hello, Next-gen Kaldi project team. Thanks for your great project. I trained zipformer-T with my own Korean dataset (over 50,000 hours). Since I need to serve on a CPU instance, I...