Darcy
Theoretically, it works for resnet18. I wonder how you obtained the Res2Net pretrained models — were they trained from scratch on ImageNet for 100 epochs?
Hi, @JiawangBian , @hello7623 : I also ran into the problem of not being able to reproduce the results of the depth model in the paper. The result of my reproduced model is far...
> Hi, @mli0603 . I'm curious about why the data format of KITTI and SCARED is different, because both are provided with stereo GT depth. In your code, KITTI ones...
OK, I get it. The GT disparity in the [Relative Response Loss paper](https://arxiv.org/abs/2003.00619) is obtained automatically by SfM, which is why they call it a _self-supervised training scheme_. But yours is provided by the...
Thank you for your reply. I have tried smaller learning rates, starting at 0.3 and gradually decreasing to 0.01, but training still produces NaN values. As the learning rate...
Here is a simple example (using a GP for super-resolution). I use the pixel coordinates (x, y) of the image as `train_x` and the RGB values as `train_y`. Everything works fine when I...
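For context, a minimal sketch of that setup — pixel coordinates as inputs, RGB as multi-output targets, then querying at sub-pixel coordinates to upsample. I use scikit-learn's `GaussianProcessRegressor` here only as a stand-in; the image size, kernel, and noise level are my own assumptions, not from the original example:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical low-resolution "image": an 8x8 grid with 3 RGB channels.
h, w = 8, 8
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
train_x = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)  # (64, 2) pixel coords
rng = np.random.default_rng(0)
train_y = rng.random((h * w, 3))  # (64, 3) RGB values in [0, 1]

# RBF kernel over pixel coordinates; small alpha for observation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-4)
gp.fit(train_x, train_y)

# Query at sub-pixel coordinates (2x upsampling): 16x16 fine grid.
fine = np.stack(
    np.meshgrid(np.arange(0, w, 0.5), np.arange(0, h, 0.5), indexing="xy"),
    axis=-1,
).reshape(-1, 2)
pred = gp.predict(fine)
print(pred.shape)  # (256, 3): one RGB triple per fine-grid pixel
```

The same structure carries over to a GPyTorch `ExactGP` (with a multitask likelihood for the three channels); only the model/optimizer boilerplate differs.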
Hi @pablospe ! I ran into the same problem. If I change the `cx` or `cy` of `intrinsic_matrix` in **view_point.json**, the rendered image doesn't change with `cx` or `cy`. Logically,...
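What one would logically expect: in the standard pinhole model, changing `cx`/`cy` translates every projected pixel by the same offset. A small numpy sketch (with hypothetical focal length and point values, independent of the renderer) illustrates this:

```python
import numpy as np

def project(K, X):
    """Project camera-frame 3D points X (N, 3) with intrinsic matrix K (3, 3)."""
    uvw = (K @ X.T).T
    return uvw[:, :2] / uvw[:, 2:3]

fx = fy = 500.0
X = np.array([[0.1, -0.2, 2.0]])  # one point in front of the camera

K1 = np.array([[fx, 0.0, 320.0], [0.0, fy, 240.0], [0.0, 0.0, 1.0]])
K2 = np.array([[fx, 0.0, 330.0], [0.0, fy, 240.0], [0.0, 0.0, 1.0]])  # cx shifted by +10

u1 = project(K1, X)
u2 = project(K2, X)
print(u2 - u1)  # [[10.  0.]] — the whole image should shift 10 px horizontally
```

So if the rendered image is identical after editing `cx`/`cy`, the renderer is presumably ignoring (or recentering) the principal point rather than applying the matrix as given.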
Thank you for your prompt response. I will give it a try! :smiley:
@GaoLei0 @KindXiaoming I don't know whether the current code (version 0.2.3) supports reproducing the results of Example 3 (Deep Formula). After downloading and executing the code, the results are as...
However, I’ve tried dozens of different seeds on versions 0.2.3 and 0.2.4, but I haven’t been able to reproduce similar results. There are always redundant functions, and the loss never...