
About the way to simplify MMS inference

LYPinASR opened this issue 2 years ago · 0 comments

🚀 Feature Request

At present, models such as wav2vec 2.0 can be loaded directly with:

```python
model_path = "/path/to/your/wav2vec/model"
model = Wav2VecModel.from_pretrained(model_path, checkpoint_file='checkpoint_best.pt')
```

I would like to load MMS in the same way and run speech recognition on it directly.

Motivation

The MMS home page provides an inference command, but it works through an indirect call to examples/speech_recognition/new/infer. I'd like to simplify this step: after loading the model, you could pass in an audio clip and get the corresponding text back directly, just like a demo.
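For context, the "demo-like" final step being requested is essentially greedy CTC decoding applied to the model's frame-level token predictions. The sketch below shows only that decoding step in isolation; the token IDs and vocabulary mapping are made-up illustrations, not the actual MMS dictionary, and a real wrapper would first run the acoustic model to produce the IDs.

```python
def greedy_ctc_decode(token_ids, id_to_char, blank_id=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks.

    token_ids:  per-frame argmax token IDs from a CTC acoustic model
    id_to_char: mapping from token ID to output character (hypothetical here)
    blank_id:   ID of the CTC blank token
    """
    chars = []
    prev = None
    for t in token_ids:
        # Emit a character only when the token changes and is not blank.
        if t != prev and t != blank_id:
            chars.append(id_to_char[t])
        prev = t
    return "".join(chars)


# Illustrative usage with a toy vocabulary (0 = blank, 5 = 'a', 6 = 'b'):
# frames [0, 5, 5, 0, 6, 6, 0, 5] decode to "aba".
print(greedy_ctc_decode([0, 5, 5, 0, 6, 6, 0, 5], {5: "a", 6: "b"}))
```

A one-call `transcribe(audio_path)` helper, as requested, would wrap feature extraction, the forward pass, and this decoding into a single function.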

Looking forward to it.

LYPinASR · May 26 '23 11:05