Shiyu
Did you split your panel vertically?
Hi @maaft, did you manage to achieve a result similar to the claimed one? I tried both the CFlow way and the DifferNet way but am still far below the performance in the...
I think `PermuteRandom` is essentially a more flexible alternative to lower-half/upper-half alternating, so I don't see a big difference. I was also trying to figure out whether it's a coupling block or...
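To make the comparison concrete, here is a minimal sketch (function names are illustrative, not FrEIA's API) contrasting a fixed lower-half/upper-half channel split with a PermuteRandom-style split, where a fixed random channel permutation is applied before splitting so the halves mix arbitrary channels:

```python
import torch

def fixed_half_split(x):
    # lower-half/upper-half alternating: always split channels in the middle
    c = x.shape[1] // 2
    return x[:, :c], x[:, c:]

def random_permute_then_split(x, perm):
    # PermuteRandom-style: apply a fixed random channel permutation first,
    # so each half can contain any subset of channels, not just a fixed block
    x = x[:, perm]
    c = x.shape[1] // 2
    return x[:, :c], x[:, c:]

x = torch.arange(8).reshape(1, 8)
perm = torch.randperm(8)  # fixed at construction time in a real flow block
print(fixed_half_split(x))
print(random_permute_then_split(x, perm))
```

Since the permutation is fixed once and reused, it is still invertible; the only difference is which channels end up on each side of the coupling.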
@maaft - I think the "first 3 blocks" for resnet18 means strides 4x, 8x, and 16x, so the channel numbers should be (64, 128, 256). See table 6. - In fact,...
@AlessioGalluccio In CFlow, there's an exponential converting `logp` to `p` as well. It's the same if there's only one feature level (such as DeiT and CaiT). But if there are...
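The point about multiple feature levels can be sketched as follows. Since `exp` is monotone, with a single level it does not change the ranking of anomaly scores; with several levels, averaging `p = exp(logp)` across levels differs from averaging `logp` directly. A minimal illustrative sketch (function name and shapes are assumptions, not the repo's exact code), assuming each level produces a per-location log-likelihood map of shape `(B, H_i, W_i)`:

```python
import torch
import torch.nn.functional as F

def anomaly_map_from_logp(logp_list, out_size):
    """Convert per-level logp maps to probabilities with exp, upsample
    them to a common size, and average across levels. Illustrative only;
    assumes logp <= 0 so exp(logp) lies in (0, 1]."""
    maps = []
    for logp in logp_list:
        prob = torch.exp(logp)  # p = exp(logp)
        prob = F.interpolate(prob.unsqueeze(1), size=out_size,
                             mode="bilinear", align_corners=False)
        maps.append(prob)
    # higher anomaly score where the likelihood is low
    return 1.0 - torch.mean(torch.stack(maps, dim=0), dim=0)
```

With one level the `exp` is a monotone reshaping of the scores; with several, doing the `exp` before averaging is what makes the multi-level case genuinely different from averaging `logp`.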
And for `logp = C * _GCONST_ - 0.5*torch.sum(z**2, 1) + logdet_J`: does it make sense to also reduce only `dim=1` when summing `logdet_J`? I subclassed `AllInOneBlock` to...
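For reference, here is a sketch of the shape arithmetic behind that line (a minimal example, not the repo's code), assuming `z` has shape `(B, C, H, W)` and `logdet_J` has already been reduced over channels only, i.e. shape `(B, H, W)`, so the result is a per-location log-likelihood map:

```python
import torch

# -0.5 * log(2 * pi): per-dimension constant of the standard normal log-density
_GCONST_ = -0.9189385332046727

def per_location_logp(z, logdet_j):
    """log p per spatial location: sum the Gaussian term over channels
    (dim=1) only, so H and W survive; logdet_j must match (B, H, W)."""
    C = z.shape[1]
    return C * _GCONST_ - 0.5 * torch.sum(z ** 2, dim=1) + logdet_j

z = torch.randn(2, 8, 4, 4)
logdet_j = torch.zeros(2, 4, 4)
print(per_location_logp(z, logdet_j).shape)  # torch.Size([2, 4, 4])
```

The key point is that if `logdet_J` were instead reduced over all non-batch dims to shape `(B,)`, it would broadcast incorrectly against the `(B, H, W)` Gaussian term, which is presumably why the channel-only reduction is being asked about.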
@mjack3 As for ActNorm, I simply moved the `_permute` of `AllInOneBlock` to the beginning of `forward` and removed the original ones. I don't think this is the root cause of...
In my case, torch==1.4.0+cu92 and lmdb==0.98
I tried without `max_readers=1`, but it doesn't change anything. Do you think it's because I started the program with `mp.spawn`, so it runs in a multiprocessing context?
You can get the loss value from `ret` here with `ret["loss"]` https://github.com/gathierry/FastFlow/blob/37c99ae33b8db0e5cb534fd0d197b0bc44817a03/main.py#L96 Then build another `loss_meter` in the `eval_once` function to aggregate the losses.
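The aggregation step can be sketched like this (a minimal running-average meter in the spirit of the one used during training; names are illustrative, not the repo's exact API):

```python
class AverageMeter:
    """Accumulate a running average of per-batch loss values."""

    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, value, n=1):
        # value is the mean loss of a batch of n samples
        self.sum += value * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)

def eval_losses(batch_losses):
    # batch_losses: iterable of (loss_value, batch_size) pairs,
    # e.g. (ret["loss"].item(), data.shape[0]) collected in eval_once
    meter = AverageMeter()
    for loss, n in batch_losses:
        meter.update(loss, n)
    return meter.avg
```

Weighting each update by the batch size keeps the average correct even when the last batch is smaller than the rest.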