Luigi Piccinelli
Thank you for your appreciation! The figure you are mentioning was produced with the following recipe: pick the attention map of the first iteration/attention layer (since the second layer is...
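A minimal sketch of the recipe above, assuming the attention tensor has shape `(batch, heads, queries, H*W)`; the shapes, the query index, and the head-averaging step are assumptions for illustration, not the exact code used for the figure:

```python
import numpy as np

# Assumed shapes: (batch, heads, queries, H*W) attention from the first
# iteration/attention layer; h, w are the feature-map resolution.
h, w = 77, 240
attn = np.random.rand(1, 8, 100, h * w)

query_idx = 0                                   # pick one query to visualize
amap = attn[0].mean(axis=0)[query_idx]          # average over heads
amap = amap.reshape(h, w)                       # back to spatial layout
amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)  # [0, 1] for overlay
```

The normalized map can then be upsampled to the input resolution and blended over the RGB image.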
The first snippet works fine, and I guess you are returning `depth_attn` from the `ISD` class as well, as a list with one `depth_attn` per resolution. The second part should...
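A hypothetical sketch of what "returning `depth_attn` as a list, one per resolution" could look like; `TinyISD`, its layer sizes, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the actual `ISD` implementation:

```python
import torch
import torch.nn as nn

class TinyISD(nn.Module):
    """Toy ISD-like module: one self-attention layer per feature resolution."""

    def __init__(self, dims=(32, 64, 128)):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            nn.MultiheadAttention(d, num_heads=4, batch_first=True) for d in dims
        )

    def forward(self, feats):
        outs, depth_attns = [], []
        for layer, x in zip(self.attn_layers, feats):
            out, attn = layer(x, x, x, need_weights=True)
            outs.append(out)
            depth_attns.append(attn)  # collect one attention map per resolution
        return outs, depth_attns

# One feature tensor per resolution: (batch, tokens, channels).
feats = [torch.randn(1, 16, d) for d in (32, 64, 128)]
outs, attns = TinyISD()(feats)
```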
Hey, thank you for the appreciation. The evaluation crop is usually quite important. Anyway, the validation RGB images typically correspond to the cropped validation images with shape (352, 1216); you can...
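A sketch of the standard KITTI-style evaluation crop to (352, 1216), assuming the raw image is at least that large: keep the bottom rows and center horizontally. The function name is an assumption for illustration:

```python
import numpy as np

def kitti_crop(img: np.ndarray, h: int = 352, w: int = 1216) -> np.ndarray:
    """Crop to (h, w): keep the bottom rows, center horizontally."""
    H, W = img.shape[:2]
    top = H - h            # drop rows from the top (mostly sky)
    left = (W - w) // 2    # center the crop horizontally
    return img[top:top + h, left:left + w]

# A typical raw KITTI frame is around (375, 1242).
img = np.zeros((375, 1242, 3), dtype=np.uint8)
cropped = kitti_crop(img)
```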
Thank you for your appreciation. In my experience, the training loss is quite high, too. I would double-check whether the model is using the backbone pretrained on ImageNet, namely,...
You could try using the provided checkpoint, test it on your data/code, and see whether the results match the ones provided. If they match, then the problem is the...
Honestly, I do not know. You are not seeing any overfitting, but the model does not generalize either: the training metrics are good, but the validation ones are not. Moreover, KITTI...
Thank you for using our model. I believe the main problem is that `idisc` is not on the PYTHONPATH. You should do something like `export PYTHONPATH=":${PYTHONPATH}"` before running the...
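A sketch of the fix, assuming the command is run from the repository root (the directory that contains the `idisc/` package); prepending that directory makes the package importable:

```shell
# Run from the iDisc repository root so `import idisc` resolves.
export PYTHONPATH="${PWD}:${PYTHONPATH}"
```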
Thank you for your appreciation. I updated the docs and splits with the corresponding normals for the NYUv2 dataset.
We do not normalize the depth; we directly predict metric values, also at training time. This means that no sigmoid trick is used to squeeze the depth prediction into [0, 1]. Therefore...
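To make the distinction concrete, here is a sketch contrasting the sigmoid trick with direct metric prediction; the toy head, the `max_depth` value, and the use of `exp` for positivity are illustrative assumptions, not the exact head used in the model:

```python
import torch
import torch.nn as nn

feat = torch.randn(1, 64, 8, 8)          # toy decoder feature map
head = nn.Conv2d(64, 1, kernel_size=1)   # toy 1x1 prediction head

# Sigmoid trick (NOT used here): squeeze into [0, 1], then rescale to a
# fixed maximum depth, so predictions are bounded by max_depth.
max_depth = 80.0
d_norm = torch.sigmoid(head(feat)) * max_depth

# Direct metric prediction: enforce positivity (e.g. via exp) without a
# fixed upper bound, so the network regresses metric depth directly.
d_metric = torch.exp(head(feat))
```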
Thanks for using our work! What is your input shape? (Or the config you are passing to the model, like `pixels_bounds`, etc.) To answer your question: the results may change...