
About layer-wise manipulation and pre-trained GAN

Open gluucose opened this issue 4 years ago • 3 comments

Hi, I've been reading your paper recently and have the following questions.

  1. In your paper, you use StyleGAN and PGGAN in the experiments. Can InterfaceGAN manipulate any pre-trained GAN (I am using a DCGAN I wrote myself)?
  2. What does `LATENT_CODE_NUM=10` (in the How to Use part of this repo) mean?
  3. In the paper, you use StyleGAN to conduct layer-wise manipulation. How do you train a boundary layer-wise, and how do you use it to vary only the latent codes that are fed to particular layers?

gluucose avatar Apr 27 '21 16:04 gluucose

  1. Yes, as long as your GAN model employs a latent space. The DCGAN structure is very similar to PGGAN's; the major difference is the training pipeline (end-to-end vs. progressive). Once the model is well trained, they should behave similarly at the inference stage.
  2. It defines how many latent codes (samples) you want to visualize to check the performance.
  3. Train only one boundary in the W space; all layers then share the same boundary. For layer-wise manipulation, please refer to this repo.
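For reference, the boundary InterfaceGAN trains is just the unit normal of a linear classifier fit on latent codes labeled for an attribute. Below is a minimal sketch of that idea. Note the assumptions: the paper fits a linear SVM, while this sketch uses plain logistic regression in NumPy as a lightweight stand-in, and the latent codes here are synthetic toy data rather than real GAN samples.

```python
import numpy as np

def train_boundary(latent_codes, labels, lr=0.1, steps=500):
    """Fit a linear classifier on labeled latent codes and return the unit
    normal of its decision hyperplane. InterfaceGAN fits a linear SVM;
    logistic regression via gradient descent is used here as a stand-in."""
    x = np.asarray(latent_codes, dtype=np.float64)   # shape (N, latent_dim)
    y = np.asarray(labels, dtype=np.float64)         # shape (N,), values in {0, 1}
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))       # sigmoid predictions
        w -= lr * (x.T @ (p - y)) / len(y)           # gradient step on weights
        b -= lr * np.mean(p - y)                     # gradient step on bias
    return w / np.linalg.norm(w)                     # unit boundary normal

# Toy example: codes whose first dimension perfectly encodes the attribute,
# so the recovered normal should point almost entirely along that axis.
rng = np.random.default_rng(0)
codes = rng.normal(size=(200, 8))
labels = (codes[:, 0] > 0).astype(float)
n = train_boundary(codes, labels)
```

Once trained, the same unit normal `n` can be shared across all layers, which is why a single boundary in W space suffices.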

ShenYujun avatar Apr 28 '21 05:04 ShenYujun

Thanks for your reply! I'm also reading your HiGAN paper and have one more question. After you conduct the semantic walk z_edit = z + αn, how do you generate the image for the latent code z_edit (is it by feeding z_edit to the generator of the pre-trained GAN)?

gluucose avatar Apr 29 '21 11:04 gluucose

Actually, HiGAN applies the manipulation in the w space rather than the z space. And yes, feeding the edited w code into the pretrained generator produces the corresponding image.
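To make that round trip concrete, here is a minimal sketch of the edit-then-synthesize loop. Everything about the generator is an assumption: `synthesize` is a hypothetical placeholder (a random linear map) standing in for a real pretrained synthesis network; only the w_edit = w + αn step itself reflects the described method.

```python
import numpy as np

def manipulate(w_code, boundary, alphas):
    """Walk a w-space code along a semantic boundary: w_edit = w + alpha * n.
    Each edited code is then fed straight to the frozen generator."""
    return [w_code + a * boundary for a in alphas]

rng = np.random.default_rng(0)
latent_dim, image_dim = 512, 64

# Hypothetical stand-in for a pretrained generator's synthesis network:
# a fixed random linear map from w codes to flat "images", used purely
# so the sketch runs end to end.
G = rng.normal(size=(image_dim, latent_dim))
synthesize = lambda w: G @ w

w = rng.normal(size=latent_dim)          # a sampled w code
n = rng.normal(size=latent_dim)          # a (random) boundary normal
n /= np.linalg.norm(n)

# alpha = 0 reproduces the original image; +/- alphas move along the semantic.
images = [synthesize(w_edit) for w_edit in manipulate(w, n, alphas=(-3, 0, 3))]
```

The key point matches the answer above: the generator is untouched, and editing happens entirely in the w code before synthesis.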

limbo0000 avatar Apr 29 '21 13:04 limbo0000