2nd element of recon_loss?

Open YongWookHa opened this issue 7 years ago • 3 comments

Hello, I'm a student studying deep learning. First of all, your code has been really helpful for learning about VAEs. Thank you very much.

I've got a question: I'm curious why you add log(2pi) as the second term of recon_loss.

Thank you in advance for your answer. Have a good day.

YongWookHa avatar Nov 07 '18 08:11 YongWookHa

Hi @YongWookHa, were you able to figure out the logic behind the reconstruction loss in vae_keras_celeba.py?

```python
recon_loss = 0.5 * K.sum(K.mean(x_out**2, 0)) + 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:])
```

moha23 avatar Dec 02 '19 06:12 moha23

Hello, @moha23. A year has passed! :)

```python
x_out = Subtract()([x_in, x_recon])
recon_loss = 0.5 * K.sum(K.mean(x_out**2, 0)) + 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:])
```

As I see it, the first term of recon_loss, `0.5 * K.sum(K.mean(x_out**2, 0))`, is the MSE term. The added value, `0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:])`, is a constant: `np.prod(K.int_shape(x_out)[1:])` computes H x W x C, the number of output dimensions. I believe it comes from writing the reconstruction loss as the negative log-likelihood of a unit-variance Gaussian, whose normalization contributes 0.5 * log(2*pi) per dimension. Since it doesn't depend on the model parameters, this constant works like a bias and isn't that meaningful for training.
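For intuition, here is a minimal numerical sketch (not from the repo; it assumes a unit-variance Gaussian decoder and uses SciPy's `norm.logpdf` as a reference). It checks that the squared-error term plus `0.5 * D * log(2*pi)` matches the summed Gaussian negative log-density:

```python
import numpy as np
from scipy.stats import norm

# With a unit-variance Gaussian decoder, -log N(x; x_recon, I) splits into
# the squared-error term plus the constant 0.5 * D * log(2*pi), D = H*W*C.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))        # stand-in for an input image, H x W x C
x_recon = rng.normal(size=(8, 8, 3))  # stand-in for its reconstruction

D = np.prod(x.shape)  # plays the role of np.prod(K.int_shape(x_out)[1:])
nll = 0.5 * np.sum((x - x_recon) ** 2) + 0.5 * D * np.log(2 * np.pi)

# Reference: the exact Gaussian log-density, summed over all pixels.
nll_ref = -norm.logpdf(x, loc=x_recon, scale=1.0).sum()

print(np.allclose(nll, nll_ref))  # True
```

So the log(2*pi) term is exactly the Gaussian normalizing constant, nothing more.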

So I guess you could get a similar result without that second term. I'm sorry I'm not in a position to test this theory myself; I'll leave that to you.
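As a sanity check on that claim, here is a toy gradient comparison (my own sketch, assuming TensorFlow 2 in eager mode; the tensor values are made up). The constant shifts the loss value but contributes nothing to the gradient:

```python
import numpy as np
import tensorflow as tf

# Toy check: the constant term has zero gradient, so dropping it changes
# the reported loss value but not the training signal.
x_out = tf.Variable([[1.0, -2.0, 0.5]])  # pretend residual x_in - x_recon
const = 0.5 * np.log(2 * np.pi) * 3      # 0.5 * log(2*pi) * D, with D = 3

with tf.GradientTape(persistent=True) as tape:
    mse = 0.5 * tf.reduce_sum(tf.reduce_mean(x_out ** 2, 0))
    loss_full = mse + const   # with the constant
    loss_plain = mse          # without it

g_full = tape.gradient(loss_full, x_out)
g_plain = tape.gradient(loss_plain, x_out)
print(np.allclose(g_full.numpy(), g_plain.numpy()))  # True: identical gradients
```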

Have a nice day.

YongWookHa avatar Dec 02 '19 08:12 YongWookHa

Thanks @YongWookHa! Yes, that's the direction I was going in too 👍

Wishing another fruitful year ahead :)

moha23 avatar Dec 02 '19 09:12 moha23