How many steps are enough if I train this model from scratch?
Hi! Nice work! Could you share how many steps would be sufficient to train a new model? I'm trying to train a 16k FAcodec. The results reconstructed from the 130,000-step checkpoint still sound different from the real speech, especially the speaker timbre.
Here is the loss curve:
The released model was trained for 670k steps. Normally 400k is sufficient for a codec, according to descript-audio-codec's practice.
Thanks!
I have trained the model on VoxCeleb2 for 400k steps. However, the reconstructed speech does not sound as good as the demo page, and the reconstruction of the noisy speech sounds even worse. Here are the samples:
- O1: https://github.com/lixuyuan102/FAcodec/blob/master/ZCwVV3niXxo_00179.m4a
- R1: https://github.com/lixuyuan102/FAcodec/blob/master/ZCwVV3niXxo_00179.wav
- O2: https://github.com/lixuyuan102/FAcodec/blob/master/Zsus9yFgaJM_00132.m4a
- R2: https://github.com/lixuyuan102/FAcodec/blob/master/Zsus9yFgaJM_00132.wav
Is there a problem with the data scale or something else?
I have checked the samples you shared. One thing I notice is that your samples sound quite noisy. I don't know whether they are from your training set or not, but I don't suggest including anything other than clean vocal data, as FAcodec is designed for speech rather than as a universal audio codec. If your training speech has not gone through a vocal separation process, it may indeed affect model performance.
Thanks, I'll try processing my training data with denoising and vocal separation models.
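Roughly what I have in mind for the preprocessing pass (just a sketch; the paths are placeholders and I'm assuming the Demucs CLI is available, but any vocal separation tool would do):

```python
import subprocess
from pathlib import Path

RAW_DIR = Path("data/voxceleb2_raw")     # placeholder: original VoxCeleb2 audio
OUT_DIR = Path("data/voxceleb2_vocals")  # placeholder: separated vocal stems

for wav in sorted(RAW_DIR.rglob("*.wav")):
    # --two-stems=vocals keeps only the vocal stem and discards the rest;
    # -o sets the root directory Demucs writes its outputs to
    subprocess.run(
        ["demucs", "--two-stems=vocals", "-o", str(OUT_DIR), str(wav)],
        check=True,
    )
```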
I found that although the training data has been denoised, tuning the pretrained FAcodec on it still results in unstable pronunciation. Moreover, the instability seems more pronounced when the original audio is of lower quality. Can I just tune the timbre module and freeze the other parts to adapt it to new speakers?
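Something like this is what I mean by freezing everything except the timbre branch (just a sketch; `timbre_extractor` is my guess at the submodule name, not necessarily the actual attribute in this repo):

```python
import torch

def freeze_all_but_timbre(model: torch.nn.Module) -> None:
    """Freeze every parameter, then re-enable only the timbre branch."""
    for p in model.parameters():
        p.requires_grad = False
    # `timbre_extractor` is a placeholder name for the timbre module;
    # adjust it to whatever the attribute is actually called
    for p in model.timbre_extractor.parameters():
        p.requires_grad = True

# the optimizer then only sees the trainable (timbre) parameters, e.g.:
# optimizer = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-5
# )
```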
BTW, I noticed that only the content, prosody, and timbre latent features are used when training the FAcodec Redecoder. May I ask why z_r is not employed?