IntroVAE
A PyTorch implementation of the paper "IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis"
I think this should be "indistinguishable", shouldn't it? I was a little confused by this typo.
https://github.com/hhb072/IntroVAE/blob/c8ce5d291fe8e66189d70b3ceddc2eb2266d3742/main.py#L191 The converse ordering, i.e., `relu(...).mean()`, seems more natural.
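To make the question concrete, here is a minimal sketch of the two orderings being compared, using made-up per-sample KL values and margin; it does not restate the exact expression at main.py#L191, only why applying the hinge before or after averaging gives different results:

```python
import torch
import torch.nn.functional as F

# Hypothetical per-sample KL values and margin, only to illustrate the two orderings.
kl_per_sample = torch.tensor([0.5, 2.0, 5.0])   # KL(q(z|x) || p(z)) for each sample
m = 3.0                                          # the margin m from the paper

# Ordering A: average over the batch first, then apply the hinge.
hinge_of_mean = F.relu(m - kl_per_sample.mean())   # relu(m - mean(kl)) -> 0.5

# Ordering B (the one suggested in the issue): hinge each sample, then average.
mean_of_hinge = F.relu(m - kl_per_sample).mean()   # relu(m - kl).mean() -> ~1.17

# The two differ whenever some samples are above the margin and others below it.
print(hinge_of_mean.item(), mean_of_hinge.item())
```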
In your paper and your code, the KL term (regularization term) is exactly the negative KL divergence of the approximate posterior from the prior, while in a VAE it should be positive....
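For reference, the standard closed-form (positive) KL regularizer for a diagonal Gaussian posterior against a standard normal prior is shown below; whether the repository's `kl_loss` returns this quantity or its negative is exactly what the question asks about:

```python
import torch

def kl_divergence(mu, logvar):
    """Positive KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    summed over latent dimensions. This is the form that enters the VAE
    loss with a plus sign."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

mu, logvar = torch.zeros(4, 8), torch.zeros(4, 8)
print(kl_divergence(mu, logvar))  # zeros: q(z|x) == p(z) gives KL = 0
```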
Would like to know if you will release the pretrained model; it would help a lot. Thank you!
How should one understand the parameters `weight_neg, weight_rec, weight_kl`? What is their relationship to `alpha, beta` in the IntroVAE paper?
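One plausible reading, sketched below, is that `weight_kl` scales the regularization term on real samples, `weight_neg` plays the role of the paper's alpha on the hinged "negative" terms, and `weight_rec` plays the role of beta on the reconstruction term. This mapping is an assumption based on the parameter names, not a confirmed answer from the repository; the default values are placeholders:

```python
import torch
import torch.nn.functional as F

# Assumed mapping: weight_kl -> coefficient on L_REG(z), weight_neg -> alpha,
# weight_rec -> beta. Inputs are batch-averaged scalar tensors.
def encoder_loss(kl_real, kl_rec, kl_fake, rec_loss,
                 m=120.0, weight_kl=1.0, weight_neg=0.5, weight_rec=0.05):
    # L_E = L_REG(z) + alpha * sum_s [m - L_REG(z_s)]^+ + beta * L_AE
    adv = F.relu(m - kl_rec) + F.relu(m - kl_fake)
    return weight_kl * kl_real + weight_neg * adv + weight_rec * rec_loss

def generator_loss(kl_rec, kl_fake, rec_loss,
                   weight_neg=0.5, weight_rec=0.05):
    # L_G = alpha * sum_s L_REG(z_s) + beta * L_AE
    return weight_neg * (kl_rec + kl_fake) + weight_rec * rec_loss

# Toy usage with arbitrary numbers, just to show the call shape.
kl_real, kl_rec, kl_fake = torch.tensor(80.0), torch.tensor(60.0), torch.tensor(50.0)
rec = torch.tensor(2000.0)
print(encoder_loss(kl_real, kl_rec, kl_fake, rec), generator_loss(kl_rec, kl_fake, rec))
```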
When I run IntroVAE on 1 GPU (to test how it works on my [anime faces](https://www.gwern.net/Faces#introvae)), I get indexing/scalar errors from PyTorch (`TypeError: only integer scalar arrays can be converted...`)
On https://github.com/hhb072/IntroVAE/blob/master/networks.py, line 40 reads `output = self.relu2(self.bn2(torch.add(output,identity_data)))`. The usual ResBlock code I see takes this form: `output = self.relu2(torch.add(self.bn2(output),identity_data))`. Why add before BatchNorm?
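For concreteness, here is a minimal sketch of the two orderings being contrasted. It is a simplified stand-in (fixed channel count, no stride or channel-changing shortcut), not the repository's actual `ResBlock`; only the final lines differ:

```python
import torch
import torch.nn as nn

class ResBlockAddThenBN(nn.Module):
    """Ordering as quoted from networks.py line 40: add the identity first,
    then pass the sum through BatchNorm and ReLU, so the skip connection is
    also normalized."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu2 = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu1(self.bn1(self.conv1(x)))
        out = self.conv2(out)
        return self.relu2(self.bn2(torch.add(out, identity)))   # add -> BN -> ReLU

class ResBlockBNThenAdd(ResBlockAddThenBN):
    """The 'classic' post-activation ordering the question expects: BatchNorm
    stays inside the residual branch, the identity is added afterwards, then ReLU."""
    def forward(self, x):
        identity = x
        out = self.relu1(self.bn1(self.conv1(x)))
        out = self.conv2(out)
        return self.relu2(torch.add(self.bn2(out), identity))   # BN -> add -> ReLU
```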