Generative-Adversarial-Network-Tutorial

Why does gLoss increase during training?

taneslle opened this issue 8 years ago · 2 comments

```python
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))
trainerG = tf.train.AdamOptimizer().minimize(g_loss, var_list=g_vars)
```

This code is meant to minimize `g_loss` so that the generator produces nearly realistic pictures. But when I train, the `gLoss` returned by

```python
_, gLoss = sess.run([trainerG, g_loss], feed_dict={z_placeholder: z_batch})
```

keeps increasing, while `dLoss` decreases as designed. Why does this happen? P.S. Judging by the results, the generator does seem to have learned something.

taneslle · Jun 07 '17
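To make the opposing objectives concrete, here is a minimal sketch of how a discriminator loss typically pairs with the generator loss quoted above. The thread only shows `g_loss`, so `Dx` (the discriminator's logits on real images) and the exact form of `d_loss` below are assumptions about the tutorial's code, not quotes from it:

```python
import tensorflow as tf

# Minimal sketch (TF1 style, matching the snippets above). Dx and Dg
# stand in for the discriminator's logits on real and generated images;
# they are placeholders here purely for illustration.
Dx = tf.placeholder(tf.float32, [None, 1])  # D's logits on real images
Dg = tf.placeholder(tf.float32, [None, 1])  # D's logits on generated images

# Discriminator objective: push real logits toward 1, fake logits toward 0.
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.zeros_like(Dg)))
d_loss = d_loss_real + d_loss_fake

# Generator objective: push the *same* fake logits Dg toward 1.
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))
```

Because `d_loss_fake` and `g_loss` are opposite labelings of the same logits `Dg`, driving one down necessarily drives the other up: a falling `dLoss` alongside a rising `gLoss` means the discriminator is currently winning that tug-of-war.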

You can think of the generator and the discriminator as playing a game against each other, in which each tries to get "better" (i.e., minimize its own loss) at the expense of the other. When D's loss decreases while G's increases, it indicates that the generator is having a harder time fooling the discriminator. You can try some of the suggestions at https://github.com/soumith/ganhacks and https://github.com/soumith/ganhacks/issues/14 and see if that makes a difference; two of them are sketched after this post.

rohan-varma · Oct 21 '17
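As a concrete starting point, here is a hedged sketch of two tricks from the ganhacks list, adapted to the TF1-style snippets in this thread: one-sided label smoothing on the discriminator's real labels, and extra generator updates per discriminator update. The loop structure and the names `Dx`, `trainerD`, `d_loss`, `x_placeholder`, `num_steps`, `z_dim`, and `next_real_batch` are illustrative assumptions, not the repo's actual code:

```python
import numpy as np
import tensorflow as tf

# Trick 1: one-sided label smoothing. Train D against soft targets
# (e.g. 0.9) for real images instead of hard 1.0, so D does not become
# overconfident and starve G of useful gradient.
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=Dx, labels=0.9 * tf.ones_like(Dx)))

# Trick 2: rebalance the game by updating G more often than D.
for step in range(num_steps):
    # One discriminator update on a real batch plus fresh noise.
    real_batch = next_real_batch(batch_size)  # hypothetical data helper
    z_batch = np.random.normal(size=[batch_size, z_dim])
    _, dLoss = sess.run([trainerD, d_loss],
                        feed_dict={x_placeholder: real_batch,
                                   z_placeholder: z_batch})

    # Two generator updates for every discriminator update.
    for _ in range(2):
        z_batch = np.random.normal(size=[batch_size, z_dim])
        _, gLoss = sess.run([trainerG, g_loss],
                            feed_dict={z_placeholder: z_batch})
```

The ratio of G to D updates is something to tune by watching both losses, not a fixed rule; the ganhacks thread linked above discusses when each adjustment tends to help.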

What if G_loss decreases and D_loss increases? Also, if I train a GAN on, say, a few hundred images from the MNIST dataset and then test my discriminator on one of the training images, will the discriminator give the correct output? In other words, will it overfit if I have a very small dataset?

wadhwasahil · Jun 20 '18