Adrian Spurr
Yes, the trick is to train D on one mini-batch of only real samples and one mini-batch of only synthetic samples, rather than mixing them. Why this performs better, I do not know for certain.
@shuzhangcasia Train D (positive) -> Train D (negative) -> Train G makes more sense: D is trained completely first, so G can then learn from an up-to-date D. I haven't seen the first ordering...
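For what it's worth, here is a minimal PyTorch sketch of that update order. The networks, shapes, and names (`D`, `G`, `real_batch`, `z`) are placeholders I made up for illustration, not the repo's actual code: D sees a purely real mini-batch, then a purely synthetic one, and only afterwards is G updated against the refreshed D.

```python
import torch
import torch.nn as nn

# Placeholder networks; the real project defines its own D and G.
D = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 10))

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.randn(16, 10)  # stand-in for a real data mini-batch
z = torch.randn(16, 4)            # latent noise for G

# 1) Train D on a mini-batch of only real samples (positive).
opt_d.zero_grad()
loss_real = bce(D(real_batch), torch.ones(16, 1))
loss_real.backward()
opt_d.step()

# 2) Train D on a mini-batch of only synthetic samples (negative).
opt_d.zero_grad()
fake_batch = G(z).detach()  # detach so G is not updated in this step
loss_fake = bce(D(fake_batch), torch.zeros(16, 1))
loss_fake.backward()
opt_d.step()

# 3) Only now train G against the freshly updated D.
opt_g.zero_grad()
loss_g = bce(D(G(z)), torch.ones(16, 1))  # G tries to make D output "real"
loss_g.backward()
opt_g.step()
```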
This is probably because you are using a newer PyTorch version, which is stricter about dtype mismatches. Can you try using out_features.float()?
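A minimal sketch of the kind of mismatch I mean, assuming `out_features` ends up as float64 (e.g. converted from NumPy) while the rest of the pipeline is float32; the shapes and the use of `nn.MSELoss` are placeholders:

```python
import torch
import torch.nn as nn

# Tensors built from NumPy arrays default to float64, while model outputs
# and weights are usually float32; newer PyTorch versions tend to reject
# the mix with errors like "expected scalar type Float but found Double".
out_features = torch.randn(8, 3, dtype=torch.float64)  # e.g. from NumPy
targets = torch.randn(8, 3)                            # default float32

criterion = nn.MSELoss()

# Casting with .float() aligns both tensors to float32 before the loss.
loss = criterion(out_features.float(), targets)
print(loss.item())
```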
Using no hand-side invariance and no scale invariance means the model must additionally learn which hand side you are showing, as well as the actual scale. It is...
Any update on whether this will be implemented soon?