adv_loss keeps increasing during training?
I set lambda-adv-target1 to 0.00 and lambda-adv-target2 to 0.001 to train the single-level model. All other parameters are the same as the author's defaults. However, I find that adv_loss keeps increasing while the other losses decrease during training. Based on the code below, I would expect adv_loss to decrease, right? Could you help me figure out what the problem is? @wasidennis Looking forward to your reply. Thank you :)
```python
_, batch = targetloader_iter.__next__()
images, _, _ = batch
images = images.to(device)

# Forward target-domain images through the segmentation network
pred_target1, pred_target2 = model(images)
pred_target1 = interp_target(pred_target1)
pred_target2 = interp_target(pred_target2)

# Adversarial step: label target predictions as "source" to fool the discriminators
D_out1 = model_D1(F.softmax(pred_target1, dim=1))
D_out2 = model_D2(F.softmax(pred_target2, dim=1))
loss_adv_target1 = bce_loss(D_out1, torch.FloatTensor(D_out1.data.size()).fill_(source_label).to(device))
loss_adv_target2 = bce_loss(D_out2, torch.FloatTensor(D_out2.data.size()).fill_(source_label).to(device))
loss = args.lambda_adv_target1 * loss_adv_target1 + args.lambda_adv_target2 * loss_adv_target2
loss = loss / args.iter_size
loss.backward()
```

@VE-yyq The adversarial loss is expected to increase during the min-max optimization, while the discriminator loss should decrease. The trend in your log is correct.
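To make the min-max dynamic concrete, here is a minimal runnable sketch of the two alternating updates (not the repo's code; `seg_net` and `disc` are hypothetical stand-ins for the segmentation network and discriminator). The adversarial loss in step (1) pushes the network to fool the discriminator, but since step (2) keeps improving the discriminator at the same time, the adversarial loss can legitimately rise while the discriminator loss falls:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
bce_loss = nn.BCEWithLogitsLoss()
source_label, target_label = 0.0, 1.0

# Toy stand-ins: 1x1 convs over a 3-class score map
seg_net = nn.Conv2d(3, 3, 1)   # plays the role of the segmentation network
disc = nn.Conv2d(3, 1, 1)      # plays the role of the discriminator
opt_g = torch.optim.SGD(seg_net.parameters(), lr=0.1)
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)

images = torch.randn(2, 3, 8, 8)  # fake target-domain batch

# (1) Adversarial step: update seg_net so its target predictions are
#     scored as "source". This is the loss that may keep increasing,
#     because the discriminator is improving simultaneously.
pred_target = seg_net(images)
d_out = disc(F.softmax(pred_target, dim=1))
loss_adv = bce_loss(d_out, torch.full_like(d_out, source_label))
opt_g.zero_grad()
loss_adv.backward()
opt_g.step()

# (2) Discriminator step: update disc to correctly label target
#     predictions. This is the loss that should trend downward.
d_out = disc(F.softmax(seg_net(images).detach(), dim=1))
loss_d = bce_loss(d_out, torch.full_like(d_out, target_label))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
```

Note the `.detach()` in step (2): the discriminator update must not backpropagate into the segmentation network, mirroring how the repo freezes one player while the other takes its turn.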
@VE-yyq Could you share your complete training curves? My email is [email protected]