ivan provalov
Setting `mixed_precision=True` seems to have a positive effect: the loss changes from `epoch 0` to `epoch 1`, but then, after `epoch 2`, the training crashes with...
Added some logging to the exception to print the tensor with the NaN value (`mixed_precision=True`): `loss:tensor(nan, device='cuda:0', grad_fn=)`
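For reference, a minimal sketch of how such a check could look in a plain PyTorch AMP training step; the `model`, `batch`, and `criterion` names are hypothetical stand-ins, not the TTS trainer's internals:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

scaler = GradScaler()

def train_step(model, batch, optimizer, criterion):
    optimizer.zero_grad()
    with autocast():  # the mixed_precision=True path
        output = model(batch["input"])
        loss = criterion(output, batch["target"])
    # Surface the offending tensor before backward(), so the crash is traceable.
    if torch.isnan(loss).any():
        raise RuntimeError(f"loss:{loss}")  # e.g. loss:tensor(nan, device='cuda:0', ...)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```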
@lexkoro thank you!
When I compare the starting learning rates of a healthy (green) and an unhealthy (orange) model run, they are the same. I think the log prints an incomplete value, which is just a...
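One way to rule out log truncation is to read the learning rate straight from the optimizer itself; a minimal sketch, with a hypothetical toy optimizer in place of the trainer's:

```python
import torch

# Toy parameter/optimizer just to make the snippet runnable.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=2.5e-4)

# Print the full-precision lr instead of trusting a rounded console line.
for i, group in enumerate(optimizer.param_groups):
    print(f"param_group {i}: lr={group['lr']:.10e}")
```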
> Is the issue "not decreasing" or "getting NaN"? I'm confused.

I am seeing both issues:

1. not decreasing avg_loss with `mixed_precision=False`
2. NaN loss with `mixed_precision=True`

I encountered the...
> > @thorstenMueller reported a similar problem regarding the learning rate recently on Matrix with Tacotron2 DDC.
> >
> > Just to post the solution to my `lr` problem. @erogol helped...
I noticed that the training loss is increasing, causing the best model to remain the same and thus the eval loss to remain constant: training, same model, same parameters (removed `lr_scheduler_params={"warmup_steps": 1}`), 3921 samples: ![Screen Shot 2022-07-24...
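For context, a sketch of where that knob sits in a GlowTTS config; the scheduler name and the warmup value below are assumptions, so check the defaults of your TTS version:

```python
from TTS.tts.configs.glow_tts_config import GlowTTSConfig

config = GlowTTSConfig(
    mixed_precision=False,
    # Assumed NoamLR scheduler; warmup_steps is the parameter removed above.
    lr_scheduler="NoamLR",
    lr_scheduler_params={"warmup_steps": 4000},  # instead of the problematic 1
)
```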
> @iprovalo I've tried the LJSpeech recipe with GlowTTS and could not replicate the "constant loss" issue. It might be about the dataset.

@erogol are you using these public datasets which you...
This example uses a microphone with the AudioWorklet interface of the Web Audio API to communicate with a vosk server via a websocket.
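The browser side speaks the same websocket protocol as the regular clients. Here is a minimal Python sketch of that protocol; the port `2700` and the raw 16 kHz / 16-bit mono PCM file are assumptions:

```python
import asyncio
import json

import websockets  # pip install websockets

async def transcribe(uri="ws://localhost:2700", path="audio.raw"):
    async with websockets.connect(uri) as ws:
        # Tell the server the sample rate of the incoming PCM stream.
        await ws.send(json.dumps({"config": {"sample_rate": 16000}}))
        with open(path, "rb") as f:
            while chunk := f.read(4000):
                await ws.send(chunk)           # raw PCM bytes
                print(await ws.recv())         # partial/complete results as JSON
        await ws.send(json.dumps({"eof": 1}))  # flush and request the final result
        print(await ws.recv())

asyncio.run(transcribe())
```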