pytorch-normalizing-flows

Different performance when using SlowMAF and MAF

Open XuebinZhaoZXB opened this issue 5 years ago • 1 comment

Hi there, I have been using your normalizing flows code for variational inference. When I use the two different flows, SlowMAF and MAF, the quality of the results is very different (the Slow version is much better than the Masked version). I suspect the cause may be an improper use of MADE: I set self.net = MADE(in_dim, [hidden_dim, hidden_dim, hidden_dim], out_dim, num_masks=1, natural_ordering=True), i.e. a single mask with the natural input ordering. All other hyperparameters were left unchanged. On the other hand, I observed that the computational efficiency is much improved.
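For concreteness, a minimal sketch of the configuration described above (the import path and the wrapper class name are assumptions for illustration; the MADE(...) call is the one quoted):

```python
import torch.nn as nn
from nflib.made import MADE  # assumed import path for this repo's MADE

class ARWrapper(nn.Module):
    """Illustrative wrapper: one MADE network with a single mask and
    natural (identity) input ordering, as described above."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = MADE(in_dim, [hidden_dim, hidden_dim, hidden_dim], out_dim,
                        num_masks=1, natural_ordering=True)

    def forward(self, x):
        return self.net(x)
```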

So can you tell me what detail I am missing that explains this result?

Much appreciated, Xuebin.

XuebinZhaoZXB · Apr 24 '20 16:04

I guess the reason is simply the number of parameters. SlowMAF uses a separate fully-connected network for each output (shift, scale) pair, while MAF uses one masked fully-connected network for all outputs. So, naturally, given the same hyperparameters, SlowMAF has more capacity and would give better quality.
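To make the capacity gap concrete, here is a rough sketch of the parameter counts under assumed shapes (three-layer MLPs of width nh; the exact layer shapes in the repo may differ):

```python
import torch.nn as nn

dim, nh = 5, 24  # illustrative: 5-d data, hidden width 24 (both assumed)

# SlowMAF-style: a separate small MLP per dimension; dimension i conditions
# on dimensions < i and outputs its own (shift, log-scale) pair.
slow_params = 0
for i in range(1, dim):  # dimension 0 needs no conditioning network
    net = nn.Sequential(
        nn.Linear(i, nh), nn.ReLU(),
        nn.Linear(nh, nh), nn.ReLU(),
        nn.Linear(nh, 2),
    )
    slow_params += sum(p.numel() for p in net.parameters())

# MAF-style: one network of the same depth/width shared across dimensions,
# emitting all 2*dim outputs at once; in MADE the masks additionally zero
# out many of these weights, so its effective capacity is lower still.
maf_net = nn.Sequential(
    nn.Linear(dim, nh), nn.ReLU(),
    nn.Linear(nh, nh), nn.ReLU(),
    nn.Linear(nh, 2 * dim),
)
maf_params = sum(p.numel() for p in maf_net.parameters())

print(f"SlowMAF-style total: {slow_params}")  # grows roughly with dim
print(f"MAF-style total:     {maf_params}")   # fixed-size shared net
```

Under these assumed shapes the per-dimension networks hold several times more free parameters than the single shared net, and masking removes connections from the shared net on top of that, so widening the MAF network (a larger hidden_dim) is one way to compensate.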

MaximArtemev · Apr 30 '20 10:04