
Unconditional model generates okay-quality fake human voice but fails on music

piobmx opened this issue 2 years ago • 5 comments

Hi, I've been playing with this diffusion model library for a few days. It's great to have a library that lets ordinary users train on audio data with limited resources.

I have a question about the training data and the output. I fed the unconditional model Mozilla's Common Voice dataset, using only one language (about 15k clips). I resampled the clips to 44.1 kHz and padded any shorter ones to 2^18 samples per file. The unconditional results were okay: I could at least tell it was human speech, although the words were never actually intelligible.
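For reference, the fixed-length preprocessing described above can be sketched with NumPy; `pad_or_trim` and `TARGET_LEN` are hypothetical names, and a real pipeline would also use something like torchaudio for the 44.1 kHz resampling step:

```python
import numpy as np

TARGET_LEN = 2 ** 18  # fixed number of samples per training example, as described above

def pad_or_trim(waveform: np.ndarray, target_len: int = TARGET_LEN) -> np.ndarray:
    """Zero-pad a mono waveform up to target_len, or trim it if longer."""
    if len(waveform) >= target_len:
        return waveform[:target_len]
    return np.pad(waveform, (0, target_len - len(waveform)))

clip = np.random.randn(44_100 * 3)  # roughly 3 s of audio at 44.1 kHz
padded = pad_or_trim(clip)          # now exactly 2**18 = 262144 samples
```

Power-of-two lengths like 2^18 are convenient here because the U-Net downsamples the signal by factors of 2 at each stage.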

But when I replaced the training data with music (mostly solo piano, same sample rate but 2^17 samples per input tensor), the model did not generate outputs that sound like piano; in fact they were mostly noise.
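One common way to produce those fixed-length music tensors is to slice each recording into non-overlapping 2^17-sample windows; a minimal NumPy sketch, where `chunk` is a hypothetical helper rather than anything from the library:

```python
import numpy as np

CHUNK_LEN = 2 ** 17  # samples per input tensor, as in the setup above

def chunk(waveform: np.ndarray, chunk_len: int = CHUNK_LEN) -> list:
    """Split a mono recording into full non-overlapping chunks, dropping the leftover tail."""
    n_chunks = len(waveform) // chunk_len
    return [waveform[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]

recording = np.random.randn(44_100 * 30)  # roughly 30 s of piano at 44.1 kHz
chunks = chunk(recording)                 # each chunk is 2**17 = 131072 samples
```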

I used the same layer configuration for both models, and tried lowering the downsampling factors and increasing the number of attention heads, but saw no significant difference. Any tips on why this happens?

piobmx avatar Oct 26 '23 17:10 piobmx

Weirdly, this improved somewhat after I switched to the default Adam optimizer with a 1e-1 learning rate and no other configuration.
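For reference, a minimal sketch of that optimizer setup in PyTorch; the tiny linear model here is just a stand-in for the actual diffusion network, not the library's API:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)  # stand-in for the real diffusion model
# Default Adam with only the learning rate set, as described above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-1)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # one Adam update with the default betas/eps
```

Note that 1e-1 is an unusually high learning rate for diffusion training (1e-4 to 1e-3 is more typical), so results with it may be unstable.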

piobmx avatar Oct 28 '23 22:10 piobmx

Sorry for the sudden question. I'd like to ask about the loss: how did it converge? What was its initial value, and how did it evolve?

0417keito avatar Dec 05 '23 13:12 0417keito

> Sorry for the sudden question. I would like to know about the value of the loss, how did the loss converge? What was the initial value of the loss and how did it evolve?

The initial value can depend on many factors, but the loss is supposed to drop like this:

[image: training loss curve]
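A smoothed curve like that is often produced by tracking an exponential moving average of the per-step loss; a pure-Python sketch, where `ema_update` and the decaying loss are hypothetical:

```python
def ema_update(ema, loss, decay=0.99):
    """Exponential moving average of the training loss, for smoother curves."""
    return loss if ema is None else decay * ema + (1.0 - decay) * loss

# Simulate a loss that decays toward a noise floor, like a typical diffusion run.
ema = None
history = []
for step in range(1000):
    loss = 1.0 * (0.99 ** step) + 0.05  # hypothetical decaying loss
    ema = ema_update(ema, loss)
    history.append(ema)
```

If your raw loss curve never trends down like this, the problem is usually in the data or the learning rate rather than the architecture.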

piobmx avatar Dec 05 '23 14:12 piobmx

Thank you.

0417keito avatar Dec 05 '23 14:12 0417keito