vits
Why doesn't VITS use a multi-scale discriminator, and why does the loss become NaN?
https://github.com/jaywalnut310/vits/blob/2e561ba58618d021b5b8323d3765880f7e0ecfdb/models.py#L369
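For clarity, by "multi-scale D" I mean the MelGAN/HiFi-GAN-style discriminator that applies copies of the same sub-network to progressively downsampled waveforms, as opposed to the discriminator defined at the linked line. A rough sketch of that idea (channel counts and layer sizes here are simplified placeholders, not the actual HiFi-GAN configuration):

```python
import torch
import torch.nn as nn

class SubDiscriminator(nn.Module):
    """One sub-discriminator; the same architecture is reused at every scale."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=1, padding=7), nn.LeakyReLU(0.1),
            nn.Conv1d(16, 64, 41, stride=4, groups=4, padding=20), nn.LeakyReLU(0.1),
            nn.Conv1d(64, 1, 3, stride=1, padding=1),
        )

    def forward(self, x):
        return self.layers(x)

class MultiScaleDiscriminator(nn.Module):
    """Runs a sub-discriminator on the raw waveform and on downsampled copies."""
    def __init__(self, num_scales=3):
        super().__init__()
        self.discriminators = nn.ModuleList(SubDiscriminator() for _ in range(num_scales))
        self.downsample = nn.AvgPool1d(4, stride=2, padding=2)  # halve the sample rate per scale

    def forward(self, x):  # x: (batch, 1, time)
        outs = []
        for d in self.discriminators:
            outs.append(d(x))
            x = self.downsample(x)  # feed a coarser view to the next scale
        return outs
```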
Also, a second question: training with FP16 sometimes causes loss = NaN. How can I fix it? Thanks.
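For reference, the kind of mitigation I have in mind is standard PyTorch AMP with dynamic loss scaling plus gradient clipping. This is only a minimal sketch; the model, optimizer, and data below are placeholders, not VITS's actual training loop:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

# Hypothetical stand-ins for the real model and data loader.
model = nn.Linear(80, 80).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
scaler = GradScaler()  # dynamic loss scaling guards against FP16 under/overflow

for step in range(100):
    x = torch.randn(16, 80, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with autocast():  # forward pass in mixed precision
        loss = nn.functional.mse_loss(model(x), x)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)          # unscale grads before clipping
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)              # step is skipped automatically if grads contain inf/NaN
    scaler.update()
```

Note that `GradScaler.step` already skips the optimizer update when the unscaled gradients contain inf/NaN, so a persistent NaN loss usually points elsewhere, e.g. an overflowing activation or a too-high learning rate.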