Is the WGAN loss correct?
https://github.com/wiseodd/generative-models/blob/c790d2c8add3d600c03ee9a301a969e10ccd7562/GAN/wasserstein_gan/wgan_tensorflow.py#L82-L83
In the code, you define the discriminator loss as:
D_loss = tf.reduce_mean(D_real) - tf.reduce_mean(D_fake)
However, the generator loss is:
G_loss = -tf.reduce_mean(D_fake)
I think the generator loss should be G_loss = tf.reduce_mean(D_fake), i.e. the negative sign should be removed.
According to the algorithm in the original paper, the losses should be:
D_loss = - tf.reduce_mean(D_real) + tf.reduce_mean(D_fake)
G_loss = - tf.reduce_mean(D_fake)
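For reference, here is a tiny NumPy sketch of one critic step following the sign convention of Algorithm 1 in the original WGAN paper, using a linear critic D(x; w) = w * x. Plain gradient descent stands in for the paper's RMSProp, and all constants (learning rate, clip value, batch size, sample distributions) are illustrative, not taken from the repo.

```python
import numpy as np

rng = np.random.default_rng(0)
w, lr, clip_c = 0.5, 0.01, 0.01          # critic weight, learning rate, clip value

x_real = rng.normal(1.0, 0.1, size=64)   # minibatch from the "data" distribution
x_fake = rng.normal(0.0, 0.1, size=64)   # minibatch from the "generator"

# Paper's form: minimize D_loss = -mean(D_real) + mean(D_fake).
# For D(x; w) = w * x, the gradient wrt w is -mean(x_real) + mean(x_fake).
grad_w = -x_real.mean() + x_fake.mean()

w = w - lr * grad_w                      # one gradient-descent step on D_loss
w = np.clip(w, -clip_c, clip_c)          # weight clipping, as in the paper
print(w)
```

Because the real samples score higher here, the step pushes w up and the clip immediately caps it at clip_c, which is the behavior the paper's weight-clipping step enforces.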
Hi, it is because there is a negative sign in this line: https://github.com/wiseodd/generative-models/blob/c790d2c8add3d600c03ee9a301a969e10ccd7562/GAN/wasserstein_gan/wgan_tensorflow.py#L86
Please do not close this issue; I'd like to keep it open so others can find it in the future.
Is there any reason for specifying the loss like this and then minimizing its negative? These three options should all be equivalent, correct?
- As currently implemented
- As suggested in the first post: remove the minus sign from the generator loss, and then let both networks minimize the defined losses (without any minus signs)
- Keep the losses as currently defined, but let the discriminator maximize D_loss directly (instead of minimizing its negative).
Or are there any practical differences between these three options?
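A quick numeric check (NumPy, not the repo's TF code) that the three sign conventions produce identical parameter updates, again for an illustrative linear critic D(x; w) = w * x where the gradient of mean(D) wrt w is just the mean of the inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
x_real = rng.normal(1.0, 0.1, size=100)  # samples standing in for the data
x_fake = rng.normal(0.0, 0.1, size=100)  # samples standing in for the generator

# Option 1 (as implemented): D_loss = mean(D_real) - mean(D_fake),
# and the optimizer minimizes -D_loss.
grad_1 = -(x_real.mean() - x_fake.mean())

# Option 2 (paper's signs): minimize D_loss = -mean(D_real) + mean(D_fake).
grad_2 = -x_real.mean() + x_fake.mean()

# Option 3: maximize D_loss from option 1, i.e. gradient *ascent* on D_loss,
# whose update direction is again the gradient of -D_loss.
grad_3 = -(x_real.mean() - x_fake.mean())

assert np.isclose(grad_1, grad_2) and np.isclose(grad_1, grad_3)
print("identical critic gradients:", grad_1)
```

So the three options differ only in where the minus sign lives (in the loss definition, in the optimizer call, or in the ascent direction); the resulting gradient step is the same in all cases.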