
Gradients are backpropagated into Generator during Discriminator Training

D-VR opened this issue 4 months ago · 0 comments

Hello, as far as I understand the code, during the DoppelGANger training loop the outputs of the Generator's forward step are not detached, so gradients flow back into the Generator (and its parameters are updated) when the discriminator loss is backpropagated. However, the Generator and Discriminator should be trained separately according to the DoppelGANger architecture.

https://github.com/netsharecmu/NetShare/blob/af026037a88db486069209e2258e11c2df1b93e2/netshare/models/doppelganger_torch/doppelganger.py#L526-L532
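
To illustrate the general mechanism, here is a minimal sketch (toy modules, not the NetShare code) showing that an undetached generator output lets the discriminator's backward pass write gradients into the generator, while detaching the output (or wrapping the forward pass in `torch.no_grad()`) prevents it:

    import torch
    import torch.nn as nn

    # Toy stand-ins for the DoppelGANger generator/discriminator (illustrative only).
    generator = nn.Linear(8, 16)
    discriminator = nn.Linear(16, 1)

    noise = torch.randn(4, 8)

    # Without detaching: the discriminator loss backpropagates into the generator.
    fake = generator(noise)
    discriminator(fake).mean().backward()
    print(generator.weight.grad is not None)  # True -> the generator received gradients

    generator.zero_grad(set_to_none=True)

    # With detaching (or torch.no_grad() around the forward pass): the graph is cut
    # at the generator output, so only the discriminator receives gradients.
    fake = generator(noise).detach()
    discriminator(fake).mean().backward()
    print(generator.weight.grad)  # None -> no gradients reached the generator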

Proposed fix:

                        # Run the generator without tracking gradients, so the
                        # discriminator's backward pass cannot reach the generator.
                        with torch.no_grad():
                            fake_attribute, _, fake_feature = self.generator.forward(
                                real_attribute_noise,
                                addi_attribute_noise,
                                feature_input_noise,
                                h0,
                                c0,
                            )
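
One caveat on this variant: if the same `fake_attribute`/`fake_feature` tensors from this forward pass are also reused later for the generator's own update step (I have not verified whether the current code does this), wrapping the pass in `torch.no_grad()` would break that update. In that case, calling `.detach()` on the tensors only where they are fed to the discriminator would be the safer fix, since it leaves the original graph intact.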

D-VR · Sep 04 '25