elisonlau
> Check [[link]](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer.decode.example). @sooftware Thanks so much for your immediate reply. Furthermore, do you have an updated recognize.py that works with the latest fairseq library/code, such as:...
> @elisonlau please consider checking the official torchaudio backend for wav2vec2-based models; IIRC it supports checkpoints from fairseq well, as you can see in this tutorial: https://pytorch.org/audio/stable/tutorials/speech_recognition_pipeline_tutorial.html...
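For context, the decoding step that recognize.py and `Wav2Vec2CTCTokenizer.decode` both perform after the model's forward pass is greedy CTC decoding: collapse consecutive repeated tokens, then drop the blank symbol. A minimal sketch of that step is below; the label set and blank index are illustrative, not fairseq's actual vocabulary.

```python
# Sketch of greedy CTC decoding; the labels and blank_id here are
# hypothetical examples, not the real fairseq/wav2vec2 vocabulary.
import itertools

def ctc_greedy_decode(token_ids, labels, blank_id=0):
    """Map a per-frame argmax sequence to text (greedy CTC decode)."""
    # 1. Collapse consecutive duplicates (CTC emits one label per frame).
    collapsed = [k for k, _ in itertools.groupby(token_ids)]
    # 2. Remove blanks, then map the remaining ids to characters.
    return "".join(labels[i] for i in collapsed if i != blank_id)

labels = ["<blank>", "h", "e", "l", "o", "|"]  # "|" = word separator (hypothetical)
frames = [1, 1, 0, 2, 2, 3, 0, 3, 4, 0]       # per-frame argmax ids
print(ctc_greedy_decode(frames, labels))       # prints "hello"
```

Note the blank between the two `3`s: without it, the repeated `l` would be collapsed into one, which is why CTC needs the blank token at all.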