About layer-wise manipulation and pre-trained GAN
Hi, I've been reading your paper recently and have the following questions.
- In your paper, you use StyleGAN and PGGAN in the experiments. Can InterfaceGAN manipulate any pre-trained GAN (I am using a DCGAN I wrote myself)?
- What does `LATENT_CODE_NUM=10` (in the How to Use part of this repo) mean?
- In this paper, you use StyleGAN to conduct layer-wise manipulation. How do you train a boundary layer-wise? And how do you use it to vary only the latent codes that are fed to particular layers?
- Yes, as long as your GAN model employs a latent space. The DCGAN structure is very similar to PGGAN; the major difference is the training pipeline (end-to-end vs. progressive). Once the model is well trained, they should work similarly at the inference stage (see the first sketch below).
- It defines how many latent codes (samples) you want to visualize to check the performance.
- Train only one boundary in the W space; all layers share the same boundary. For layer-wise manipulation, please refer to this repo (see the second sketch below).
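
For the first question, here is a minimal sketch of applying an InterfaceGAN-style linear edit to a custom DCGAN. `DCGANGenerator` and the file paths are placeholders for your own model and artifacts; `boundary.npy` is assumed to hold a unit normal vector such as the one saved by the repo's boundary-training script. It also shows `LATENT_CODE_NUM` in its role as the number of samples to visualize:

```python
# Sketch only: your own generator class and checkpoint stand in for the real ones.
import numpy as np
import torch

LATENT_DIM = 100        # DCGAN latent dimensionality (assumption)
LATENT_CODE_NUM = 10    # number of latent codes (samples) to visualize

generator = DCGANGenerator()                      # hypothetical: your own model class
generator.load_state_dict(torch.load("dcgan_generator.pth"))
generator.eval()

# Semantic direction n, e.g. the normal of a linear SVM boundary in z space.
boundary = np.load("boundary.npy").reshape(1, LATENT_DIM)

z = np.random.randn(LATENT_CODE_NUM, LATENT_DIM)
for alpha in np.linspace(-3.0, 3.0, 7):
    z_edit = z + alpha * boundary                 # linear walk: z_edit = z + alpha * n
    codes = torch.from_numpy(z_edit).float()
    # reshape codes to (N, LATENT_DIM, 1, 1) first if your generator expects 4-D noise
    with torch.no_grad():
        images = generator(codes)
    # save or visualize `images` for each alpha here
```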
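For the layer-wise case, one common convention is to broadcast the single W-space code to per-layer codes (the "W+" layout) and shift only the chosen layers along the shared boundary. The sketch below assumes a StyleGAN-like synthesis network that accepts codes of shape `(N, num_layers, 512)`; `synthesis` and the boundary file name are assumptions, not this repo's API:

```python
# Sketch only: layer-wise editing with one shared W-space boundary.
import numpy as np

NUM_LAYERS = 18   # style inputs for a 1024x1024 StyleGAN (assumption)
W_DIM = 512

boundary = np.load("stylegan_w_boundary.npy").reshape(1, W_DIM)

def layerwise_edit(w, alpha, layers):
    """Shift only the chosen layers' codes along the shared boundary.

    w: (N, W_DIM) codes from the mapping network.
    layers: indices of style layers to edit, e.g. range(4) for coarse styles.
    """
    w_plus = np.tile(w[:, np.newaxis, :], (1, NUM_LAYERS, 1))  # broadcast to W+
    w_plus[:, layers, :] += alpha * boundary                   # edit selected layers only
    return w_plus

# images = synthesis(layerwise_edit(w, alpha=2.0, layers=list(range(4))))
```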
Thanks for your reply! I'm also reading your HiGAN paper and have one more question. After you conduct the semantic walk z_edit = z + αn, how do you generate the image for the latent code z_edit (is it by feeding z_edit to the generator of the pre-trained GAN)?
Actually, HiGAN applies the manipulation in the w space instead of the z space. And yes, feeding the edited w code into the pretrained generator yields the corresponding image.
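
To make that concrete, here is a minimal sketch of the w-space walk described above, assuming a StyleGAN-like generator split into a mapping network and a synthesis network. `mapping`, `synthesis`, and the boundary file name are placeholders, not the actual HiGAN code:

```python
# Sketch only: edit in w space, then feed the edited code to the generator.
import numpy as np

boundary = np.load("w_boundary.npy").reshape(1, 512)  # semantic direction n in w space

z = np.random.randn(1, 512)
w = mapping(z)                 # z -> w via the mapping network (placeholder call)
w_edit = w + 2.0 * boundary    # w_edit = w + alpha * n, here with alpha = 2.0
image = synthesis(w_edit)      # edited image from the pretrained generator
```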