LYPinASR

13 issue results in LYPinASR

### Feature request When I follow the long-form transcription example for whisper-large with Korean, the output is English. But after fine-tuning the whisper-large model with some Korean data, the...

## ❓ Questions and Help How can I randomly initialize the parameters of the last few layers of a pre-trained model, or copy the parameters of the n-th layer to the...

question
needs triage
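For the question above, a minimal PyTorch sketch of both operations, assuming the model exposes its transformer blocks as an `nn.ModuleList` (the stand-in layers here are plain `nn.Linear` modules for illustration; real encoder blocks would contain several sub-modules each):

```python
import copy

import torch
import torch.nn as nn


def reinit_last_layers(layers: nn.ModuleList, k: int) -> None:
    """Randomly re-initialize the last k layers of a layer stack."""
    for layer in layers[-k:]:
        for m in layer.modules():
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.zeros_(m.bias)


def copy_layer_params(layers: nn.ModuleList, src: int, dst: int) -> None:
    """Copy the parameters of layer `src` into layer `dst`."""
    layers[dst].load_state_dict(copy.deepcopy(layers[src].state_dict()))
```

For a Hugging Face wav2vec 2.0 model the stack would be something like `model.wav2vec2.encoder.layers`; the exact attribute path depends on the model class, so check `model.named_modules()` first.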

## 🚀 Feature Request At present, models such as W2V2 can be loaded directly with the following commands: model_path = "/path/to/your/wav2vec/model" model = Wav2VecModel.from_pretrained(model_path, checkpoint_file='checkpoint_best.pt') I also want to...

enhancement
help wanted
needs triage
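Until a model class gains its own `from_pretrained` hook, a generic workaround is to load the checkpoint dict manually and push its weights into an already-constructed module. A sketch, assuming a fairseq-style checkpoint that stores the weights under a `"model"` key (the helper name is ours, not a library API):

```python
import torch
import torch.nn as nn


def load_checkpoint_into(model: nn.Module, checkpoint_path: str):
    """Load a fairseq-style checkpoint file into an existing nn.Module.

    fairseq checkpoints usually store the state dict under the "model"
    key; fall back to treating the whole file as a state dict otherwise.
    Returns the (missing, unexpected) key lists from load_state_dict.
    """
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    missing, unexpected = model.load_state_dict(state, strict=False)
    return missing, unexpected
```

`strict=False` lets you inspect the missing/unexpected key lists instead of failing when the architecture only partially matches the checkpoint.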

### System Info
- `transformers` version: 4.28.1
- Platform: Linux-4.15.0-20-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1 (True)
- ...

Does FHVAE have a PyTorch or tensorflow-gpu 2.x implementation?

Hello, in my res.res there was no "wer" entry. Why?

Hello! I am working on fine-tuning HuBERT_base_ls_960.pt with 1 h of data, but the result is worse than yours in the ILS-SSL paper. What are your parameters for the 1 h setting? Thank you!

### 🚀 The feature Hello! I want to use a fine-tuned HuBERT_base model. However, torchaudio.pipelines only has HUBERT_ASR_LARGE and HUBERT_ASR_XLARGE. What should I do to get a HUBERT_ASR_BASE...

triaged
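One workaround for the request above: torchaudio.pipelines does ship the self-supervised `HUBERT_BASE` bundle, so you can take its encoder, attach your own linear CTC head, and load your fine-tuned weights into the combined module. A sketch under those assumptions (the wrapper class is ours, not a torchaudio API; a tiny stand-in encoder is used so the snippet runs without downloading weights):

```python
import torch
import torch.nn as nn


class CTCHead(nn.Module):
    """Wrap a pretrained speech encoder with a linear CTC output layer."""

    def __init__(self, encoder: nn.Module, feature_dim: int, num_labels: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(feature_dim, num_labels)

    def forward(self, x):
        out = self.encoder(x)
        # torchaudio's Wav2Vec2Model.forward returns (features, lengths);
        # a plain module returns the features tensor directly.
        feats = out[0] if isinstance(out, tuple) else out
        return self.head(feats)


# With torchaudio (downloads the self-supervised base weights):
#   encoder = torchaudio.pipelines.HUBERT_BASE.get_model()
#   model = CTCHead(encoder, feature_dim=768, num_labels=vocab_size)
#   model.load_state_dict(your_finetuned_state_dict, strict=False)
```

The base encoder's feature dimension is 768; the label count must match the vocabulary your fine-tuned checkpoint was trained with.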

How about developing audio examples with MAML or Reptile for W2V2, Whisper, MMS, and so on?

Hello, after fine-tuning Whisper following the blog "https://huggingface.co/blog/fine-tune-whisper", I find that the decoder parameters do not appear to have been updated. Why?
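A quick way to verify the claim above is to snapshot the parameters before training and diff them afterwards. A small sketch (the helper names are ours; the demo model is a stand-in, but the same two calls work on a Whisper model, where you would filter the returned names for the substring "decoder"):

```python
import torch
import torch.nn as nn


def snapshot(model: nn.Module):
    """Detach and clone every parameter, keyed by name."""
    return {n: p.detach().clone() for n, p in model.named_parameters()}


def changed_params(model: nn.Module, snap: dict):
    """Return the names of parameters that differ from the snapshot."""
    return [
        n for n, p in model.named_parameters()
        if not torch.equal(p.detach(), snap[n])
    ]
```

If the decoder names never show up after a few optimizer steps, check whether those parameters were frozen (`requires_grad=False`) or excluded from the optimizer's parameter groups before training started.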