LYPinASR
I have solved the problem. Step 1: upgraded transformers — this did not fix it. Step 2: added an option like "generate_kwargs = {"task": "transcribe", "language": ""}" — this did not fix it either. Step 3: added a line like "pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="ko", task="transcribe")",...
Thanks for your reply! I want to use the base model before and after finetuning to compare the output representation of each layer. As for the dataset, 10h is perfect....
How should I apply it to ASR? Thank you!
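A sketch of how the per-layer representations could be pulled out for such a comparison, assuming a Whisper-style encoder-decoder; "openai/whisper-tiny" and the random input features are illustrative placeholders, not the thread's actual setup:

```python
import torch
from transformers import WhisperModel

# NOTE: "openai/whisper-tiny" is an assumed checkpoint for illustration;
# the same code would run on the base model and on the fine-tuned one.
model = WhisperModel.from_pretrained("openai/whisper-tiny")
model.eval()

# Dummy log-mel features: (batch, num_mel_bins, frames) = (1, 80, 3000).
# In practice these would come from WhisperFeatureExtractor on real audio.
feats = torch.randn(1, 80, 3000)
dec_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    out = model(
        input_features=feats,
        decoder_input_ids=dec_ids,
        output_hidden_states=True,  # expose every layer's representation
    )

# Tuples of per-layer hidden states: embeddings + one entry per layer.
# Comparing these tuples across the two checkpoints (e.g. cosine similarity
# per layer) gives the before/after-finetuning comparison described above.
print(len(out.encoder_hidden_states), len(out.decoder_hidden_states))
```

Running the same forward pass on the base checkpoint and the fine-tuned checkpoint yields two aligned sets of hidden states that can be compared layer by layer.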
Thank you, Pran-Ker. Your answer applies to the pretrained model, but my focus is the finetuned model. And as far as I know, the finetuned model has no func...