Wang Yan
Hello, I have the same problem as you. When using WN only as the decomposition of W into g and v, the classification task runs normally (accuracy...
G_loss_adv = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_fake_logit, labels=tf.ones_like(d_fake_logit)), name='g_loss')
d_loss_pos = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_real_logit, labels=tf.ones_like(d_real_logit)), name='d_loss_real')
d_loss_neg = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_fake_logit, labels=tf.zeros_like(d_fake_logit)), name='d_loss_fake')
D_loss_adv = tf.add(.5 * d_loss_pos, .5 * d_loss_neg, name='d_loss')
# about accuracy...
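For anyone checking the loss values themselves, here is a minimal NumPy sketch of the same adversarial losses, using the numerically stable formulation that `tf.nn.sigmoid_cross_entropy_with_logits` documents (the logit values below are made-up examples, not from my run):

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# Hypothetical discriminator logits on a real batch and a generated batch.
d_real_logit = np.array([2.0, 1.5, 0.5])
d_fake_logit = np.array([-1.0, -0.5, 0.2])

# Generator loss: push D(G(z)) toward the "real" label 1.
g_loss_adv = np.mean(
    sigmoid_cross_entropy_with_logits(d_fake_logit, np.ones_like(d_fake_logit)))

# Discriminator loss: real -> 1, fake -> 0, each term weighted 0.5.
d_loss_pos = np.mean(
    sigmoid_cross_entropy_with_logits(d_real_logit, np.ones_like(d_real_logit)))
d_loss_neg = np.mean(
    sigmoid_cross_entropy_with_logits(d_fake_logit, np.zeros_like(d_fake_logit)))
d_loss_adv = 0.5 * d_loss_pos + 0.5 * d_loss_neg

print(g_loss_adv, d_loss_adv)
```

A sanity check: at logit 0 the per-example loss is log(2) for either label, so both losses should sit around 0.69 when D is undecided.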
@DEKHTIARJonathan Thanks for your suggestion, I have updated the information.
@rafaelvalle Your suggestion is:
d_loss_neg = tf.reduce_mean(1 - tf.nn.sigmoid_cross_entropy_with_logits(
    logits=d_fake_logit, labels=tf.zeros_like(d_fake_logit)), name='d_loss_fake')
Actually, d_fake_logit = D(G(z)) in my implementation. When the input noise z is passed through to D, its value should be relatively small, close to 0,...
@rafaelvalle I think I partly understand your point: you mean that my D(G(z)) being too large and hard to reduce may be caused by vanishing gradients, so...
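To illustrate the vanishing-gradient concern numerically: when D confidently rejects the fakes (large negative d_fake_logit), the original "saturating" generator loss log(1 - sigmoid(x)) has an almost-zero gradient, while the non-saturating form used above (labels = ones, i.e. -log(sigmoid(x))) keeps a strong gradient. A small sketch, with -8.0 as an arbitrary example logit:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# D confidently rejects the fakes: large negative logit (made-up value).
x = -8.0

# Saturating loss log(1 - sigmoid(x)):
# d/dx log(1 - sigmoid(x)) = -sigmoid(x) -> vanishes as x -> -inf.
grad_saturating = -sigmoid(x)

# Non-saturating loss -log(sigmoid(x)) (labels = ones, as in G_loss_adv):
# d/dx -log(sigmoid(x)) = sigmoid(x) - 1 -> stays near -1.
grad_non_saturating = sigmoid(x) - 1.0

print(grad_saturating)      # tiny magnitude: almost no learning signal for G
print(grad_non_saturating)  # close to -1: strong learning signal for G
```

So if D(G(z)) stays large in magnitude and G stops improving, checking which of these two gradient regimes you are in is a quick diagnostic.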