Gradients are backpropagated into Generator during Discriminator Training
Hello, as far as I understand the code, during the DoppelGANger training loop the outputs of the generator forward pass are not detached, so when the discriminator loss is backpropagated, gradients also flow into the generator's parameters. However, the generator and discriminator should be trained separately according to the DoppelGANger architecture.
https://github.com/netsharecmu/NetShare/blob/af026037a88db486069209e2258e11c2df1b93e2/netshare/models/doppelganger_torch/doppelganger.py#L526-L532
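For illustration, here is a minimal, self-contained sketch of the problem (not NetShare code; the tiny `nn.Linear` stand-ins for the generator and discriminator and the dummy loss are made up). It shows how a discriminator backward pass deposits gradients in the generator's parameters when the generator output is not detached:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the DoppelGANger generator / discriminator.
gen = nn.Linear(4, 4)
disc = nn.Linear(4, 1)

noise = torch.randn(8, 4)
fake = gen(noise)            # generator forward pass, NOT detached

d_loss = disc(fake).mean()   # dummy "discriminator loss" on fake samples
d_loss.backward()            # discriminator training step

# Gradients have leaked into the generator as well:
print(gen.weight.grad is not None)  # True -> the generator's .grad buffers
                                    # are polluted by the discriminator step
```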
Proposed fix:
```python
with torch.no_grad():
    fake_attribute, _, fake_feature = self.generator.forward(
        real_attribute_noise,
        addi_attribute_noise,
        feature_input_noise,
        h0,
        c0,
    )
```
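An alternative, untested sketch would be to keep the generator forward pass as-is and call `.detach()` on its outputs before they are fed to the discriminator (variable names copied from the linked snippet; how the outputs are consumed afterwards is assumed):

```python
# Build the generator outputs normally, then cut the graph before the
# discriminator sees them, so disc_loss.backward() cannot reach the
# generator's parameters.
fake_attribute, _, fake_feature = self.generator.forward(
    real_attribute_noise,
    addi_attribute_noise,
    feature_input_noise,
    h0,
    c0,
)
fake_attribute = fake_attribute.detach()
fake_feature = fake_feature.detach()
```

`torch.no_grad()` is slightly cheaper because no graph is built for the generator at all, while `.detach()` keeps the generator graph intact; for a pure discriminator step either works, as long as these outputs are not reused for the generator's own loss.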