BERT encoder problem in train_second.py
I get this error when I run the second-stage training. Urgent help appreciated, please.
```
@thermal:~/StyleTTS2$ python train_second.py --config_path ./Configs/config.yml
Loading the first stage model at /home/shaima/StyleTTS2/Models/LJSpeech/first_stage.pth ...
decoder loaded
text_encoder loaded
style_encoder loaded
text_aligner loaded
pitch_extractor loaded
    max_lr: 2e-05
    max_momentum: 0.95
    maximize: False
    min_lr: 0
    weight_decay: 0.01
)
decoder AdamW (
Parameter Group 0
    amsgrad: False
    base_momentum: 0.85
    betas: (0.0, 0.99)
    capturable: False
    differentiable: False
    eps: 1e-09
    foreach: None
    fused: None
    initial_lr: 1e-05
    lr: 1e-05
    max_lr: 2e-05
    max_momentum: 0.95
    maximize: False
    min_lr: 0
    weight_decay: 0.0001
)
> /home/shaima/StyleTTS2/train_second.py(459)main()
    457         set_trace()
    458
--> 459         optimizer.step('bert_encoder')
    460         optimizer.step('bert')
    461         optimizer.step('predictor')
ipdb>
```
I got the same error. Did you manage to fix it? It happens because `g_loss` is NaN, but I'm not sure why that is. Commenting out the `set_trace()` line just raises a different error; you could also try reducing `num_workers` to 0 and using the trace to inspect what's happening.
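Since the crash traces back to `g_loss` becoming NaN, one thing you can do at the `ipdb>` prompt is test the loss for finiteness before the `optimizer.step(...)` calls run. A minimal sketch (plain Python for the scalar case; `torch.isfinite(loss)` is the tensor equivalent, and the helper name here is hypothetical):

```python
import math

def should_skip_step(loss_value: float) -> bool:
    """Return True when the loss is NaN or Inf, i.e. the optimizer
    step for this batch should be skipped while debugging."""
    return not math.isfinite(loss_value)

# e.g. at the ipdb prompt: should_skip_step(g_loss.item())
```

Guarding the step this way won't fix the root cause, but it tells you which batch first produces a non-finite loss.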
The `load_checkpoint` function in https://github.com/yl4579/StyleTTS2/issues/254 fixes my issue. Thanks @5Hyeons!
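For anyone who can't follow the link: the general shape of this kind of fix is to load the first-stage checkpoint non-strictly, keeping only the parameters that also exist (with matching shapes) in the second-stage model, so newly added modules such as `bert_encoder` keep their initialization. A hedged sketch, not the actual code from that issue; `filter_compatible` is a hypothetical helper:

```python
def filter_compatible(ckpt_state, model_state):
    # Keep only checkpoint tensors whose key exists in the model with the
    # same shape; anything else (e.g. modules absent from the first-stage
    # checkpoint) is dropped and can be left at its init by calling
    # model.load_state_dict(filtered, strict=False).
    return {
        k: v for k, v in ckpt_state.items()
        if k in model_state and v.shape == model_state[k].shape
    }
```

With PyTorch you would then do `model.load_state_dict(filter_compatible(ckpt, model.state_dict()), strict=False)`.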