Why is gLoss increasing during training?
I have the following generator loss and training op:

```python
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))
trainerG = tf.train.AdamOptimizer().minimize(g_loss, var_list=g_vars)
```

This code seems to minimize g_loss so that the generator produces images close to the real ones. But during training, the gLoss returned by

```python
_, gLoss = sess.run([trainerG, g_loss], feed_dict={z_placeholder: z_batch})
```

keeps increasing, while dLoss decreases as designed. Why does this happen?

P.S. Judging by the results, the generator does seem to have learned something.
You can think of the generator and the discriminator as playing a game against each other, in which each tries to get "better" (i.e., minimize its own loss) at the expense of the other. When D's loss decreases and G's increases, it indicates that the generator is having a harder time fooling the discriminator. You can try some of the suggestions at https://github.com/soumith/ganhacks and https://github.com/soumith/ganhacks/issues/14 and see if they make a difference.
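To see why the two loss curves tend to move against each other, here is a minimal, self-contained sketch of the opposing objectives in the same TF 1.x style as your code. The toy one-layer networks, layer sizes, and learning rate are placeholders of my own, not your model; only `Dg`, `g_loss`, and `g_vars` come from your question.

```python
import tensorflow as tf  # TF 1.x, matching the question's API

# Toy stand-in networks, just to make the loss structure concrete.
z = tf.placeholder(tf.float32, [None, 16], name="z")    # noise input
x = tf.placeholder(tf.float32, [None, 784], name="x")   # real images

with tf.variable_scope("G"):
    G = tf.layers.dense(z, 784, activation=tf.nn.tanh)  # generated images

def discriminator(inp, reuse):
    with tf.variable_scope("D", reuse=reuse):
        return tf.layers.dense(inp, 1)                  # logits

Dx = discriminator(x, reuse=False)  # logits on real images
Dg = discriminator(G, reuse=True)   # logits on generated images

# Discriminator: push D(real) toward 1 and D(fake) toward 0.
d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx))
) + tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.zeros_like(Dg))
)

# Generator: push D(fake) toward 1, i.e. the exact opposite of the second
# term in d_loss. Both losses pull the same quantity D(G(z)) in opposite
# directions, so whenever D gains ground, g_loss rises by construction.
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="D")
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="G")

trainerD = tf.train.AdamOptimizer(2e-4).minimize(d_loss, var_list=d_vars)
trainerG = tf.train.AdamOptimizer(2e-4).minimize(g_loss, var_list=g_vars)
```

Note that g_loss and the fake-image term of d_loss use the same logits `Dg` with opposite labels, so one player improving on that term literally means the other player's loss increasing. A rising gLoss alongside a falling dLoss is therefore not a bug in itself; it only becomes a problem if gLoss diverges and the samples stop improving.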
What if G_loss decreases and D_loss increases? Also, if I train a GAN on, say, a few hundred images from the MNIST dataset and then test my discriminator on one of the training images, will the discriminator give the correct output? In other words, will it overfit if I have a very small dataset?