Glory Chen
This seems to happen only when training on the latest GPUs, like the A5000...
Also, here the MLP maps to a relatively large resolution right at the start. https://github.com/ParthaEth/Regularized_autoencoders-RAE-/blob/9478b8f781f7229807a0d7c4ea92a7c9c7994bfa/models/rae/rae_celeba.py#L124
You are required to create it yourself.
I see. Btw, shouldn't we only take the max along the dimension of the samples z? https://github.com/XavierXiao/Likelihood-Regret/blob/5517c9bac5992b116e55bb61cad8171e0d585063/compute_LR.py#L64
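To illustrate the suggestion above, here is a minimal NumPy sketch (array shape and variable names are my assumptions, not from the linked code) of maxing only along the sample dimension of z, so that each batch element keeps its own best sample:

```python
import numpy as np

# Hypothetical log-likelihood estimates: one value per (z sample, batch element).
# Shape: (n_samples, batch_size).
log_px_given_z = np.array([
    [-1.0, -2.5],   # z sample 0
    [-0.5, -3.0],   # z sample 1
    [-2.0, -1.5],   # z sample 2
])

# Max only along the sample axis (axis 0): one value per batch element remains.
best_per_example = log_px_given_z.max(axis=0)
print(best_per_example)  # one max per column, i.e. per batch element
```

Maxing over all dimensions at once would instead collapse the batch to a single scalar, which is presumably not what the likelihood-regret computation intends.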
The PyTorch Lightning version I used is 0.8.0. It matches calling code like:

```python
experiment = VAEXperiment.load_from_checkpoint(config['resume_path'], **{'vae_model': model})
```

The reason for passing `model` as an argument is that a model can...
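A plain-Python sketch of why the live object has to be injected at load time (hypothetical class and parameter names): a checkpoint can restore simple hyperparameter values, but not a constructed model instance, so the caller supplies it as an extra keyword argument:

```python
class VAEModel:
    """Hypothetical stand-in for a constructed VAE network."""
    def __init__(self, latent_dim):
        self.latent_dim = latent_dim

class Experiment:
    """Hypothetical stand-in for the Lightning experiment module."""
    def __init__(self, vae_model, lr):
        self.model = vae_model   # live object, must be injected by the caller
        self.lr = lr             # simple value, restorable from a checkpoint

# What a checkpoint can reasonably store: plain hyperparameters, not objects.
saved_hparams = {"lr": 1e-3}

# Mirrors load_from_checkpoint(path, **{'vae_model': model}): restored hparams
# plus a caller-supplied model instance.
experiment = Experiment(vae_model=VAEModel(128), **saved_hparams)
```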
Actually, I saw there is a `detach` statement, but only in a comment. https://github.com/AntixK/PyTorch-VAE/blob/8700d245a9735640dda458db4cf40708caf2e77f/models/iwae.py#L152
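For context, the quantity usually detached in IWAE-style estimators is the set of normalized importance weights. A NumPy sketch of that normalization (detaching itself is a PyTorch autograd concept, so it is only noted in comments; values here are made up):

```python
import numpy as np

# Hypothetical per-sample log importance weights log w_i for one batch element.
log_w = np.array([-1.0, -0.5, -2.0])

# Normalized importance weights via a numerically stable softmax over samples.
# In PyTorch, these weights are typically .detach()-ed before weighting the
# per-sample loss, so no gradient flows through the normalization itself.
w_tilde = np.exp(log_w - log_w.max())
w_tilde = w_tilde / w_tilde.sum()

print(w_tilde)  # non-negative weights summing to 1
```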
Besides, as the original paper says: "Vanilla VAE separated out the KL divergence in the bound in order to achieve a simpler and lower-variance update. Unfortunately, no analogous trick applies...
@mkocabas @bvoq @oli4jansen @wolterlw et al. Same here. I believe there might be a bug in `demo`? The rendered humans look smaller than the image evidence, even on the 3DPW dataset. But the demo...
The bug is in the line below. It can be fixed simply by scaling `bboxes[:, 2:]` by 1.1x. https://github.com/mkocabas/VIBE/blob/851f779407445b75cd1926402f61c931568c6947/demo.py#L82
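A minimal sketch of the suggested fix, assuming each `bboxes` row is `[center_x, center_y, width, height]` (the column layout is my assumption about the linked demo code):

```python
import numpy as np

# Hypothetical boxes: [center_x, center_y, width, height] per row (assumed layout).
bboxes = np.array([
    [100.0, 120.0, 50.0, 80.0],
    [200.0, 210.0, 40.0, 60.0],
])

# Enlarge only the size columns (width, height) by 1.1x; centers stay fixed,
# so the box grows symmetrically around the person.
bboxes[:, 2:] *= 1.1
```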
> You don't have to train longer. Could you let me know any modifications you made on top of the released code? Hi @mks0601, I thought the conventional training epochs...