Gasser Elbanna

5 comments by Gasser Elbanna

Hello, thank you for the quick response. I used the default config file for pre-training, so I am assuming these are the parameters below that I need to adjust? Dynamic Batching...
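
For anyone tuning the same recipe, a minimal sketch of adjusting those values from the command line rather than editing the YAML. The script and hparams file names are placeholders for the actual pre-training recipe, and `num_buckets` is an assumed parameter name; `max_batch_length` and `grad_accumulation_factor` are the ones discussed in this thread.

```bash
# Hypothetical invocation: "train.py" and "hparams/wav2vec2_base.yaml" stand in for
# the actual SpeechBrain pre-training recipe files. HyperPyYAML lets you override any
# hyperparameter defined in the YAML from the command line, so the dynamic-batching
# values can be changed without touching the config:
#   --max_batch_length : audio packed into one dynamic batch, per GPU
#   --num_buckets      : assumed name for the length-bucketing parameter
python train.py hparams/wav2vec2_base.yaml \
    --max_batch_length=400 \
    --num_buckets=60 \
    --grad_accumulation_factor=2
```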

Hi, thanks @TParcollet for the explanation; it's clearer now. Thanks @Adel-Moumen for pointing out the flag. I am currently pretraining with `--grad_accumulation_factor=2` and `max_batch_length=400` on 8 GPUs, yielding 2 *...
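
As a sanity check on the effective batch per optimizer step with these settings (assuming `max_batch_length` is expressed in seconds of audio per GPU, which is how the dynamic batcher is typically configured):

```bash
# Back-of-the-envelope effective batch per optimizer step, assuming max_batch_length
# is seconds of audio per GPU:
#   400 s/GPU  x  8 GPUs  x  grad_accumulation_factor 2  =  6400 s of audio per step
echo $((400 * 8 * 2))   # prints 6400
```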

> BTW, are you using --precision=fp16 for the pre-training?

I am using fp32 now.
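
For reference, switching precision only changes the run option on the launch side; a sketch with placeholder script/hparams names, assuming `fp32` is the default setting:

```bash
# Placeholder script and hparams names; only the --precision run option differs.
python train.py hparams/wav2vec2_base.yaml --precision=fp32   # current run (full precision)
python train.py hparams/wav2vec2_base.yaml --precision=fp16   # mixed-precision alternative
```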

Hi @TParcollet @Adel-Moumen, I am just following up on an issue posted [here](https://github.com/speechbrain/speechbrain/issues/2588) related to training Wav2Vec 2.0 with multiple GPUs using torchrun. I was wondering if you have any...
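
For context, the multi-GPU runs in question are launched along these lines; a sketch where the recipe script and hparams file names are placeholders, and the flags are the ones mentioned earlier in this thread:

```bash
# Sketch of a single-node, 8-GPU DDP launch with torchrun; "train.py" and
# "hparams/wav2vec2_base.yaml" are placeholders for the wav2vec 2.0 pre-training recipe.
torchrun --nproc_per_node=8 train.py hparams/wav2vec2_base.yaml \
    --max_batch_length=400 \
    --grad_accumulation_factor=2 \
    --precision=fp32
```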

Hi @TParcollet,

> Is the screenshot that you posted the screen of the freeze?

Yes, and then the code crashes later.

> Can you see the GPUs being utilised?

Yes,...
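
Not from this thread, but a couple of generic knobs that often help when diagnosing a DDP freeze like this one: more verbose NCCL and torch.distributed logging, plus dumping the Python stack of a hung worker. The launch command below reuses the placeholder recipe names from above.

```bash
# Generic diagnostics for a hanging DDP run (not specific to SpeechBrain):
# verbose collective/NCCL logging often shows which rank or collective is stuck.
export NCCL_DEBUG=INFO
export TORCH_DISTRIBUTED_DEBUG=DETAIL
torchrun --nproc_per_node=8 train.py hparams/wav2vec2_base.yaml

# While the run is frozen, check utilisation and dump the stack of one worker
# (py-spy is an external tool, installable with `pip install py-spy`).
nvidia-smi
py-spy dump --pid <worker_pid>
```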