Train using depth ground truth, not disparity
We want to train AnyNet to compute its loss on depth rather than disparity. As discussed in the Pseudo-LiDAR paper, a depth-based loss is more effective, and AnyNet's fast runtime makes it a good fit for the 3D object detection pipeline.

Is it possible to train the AnyNet model with a depth-based loss?
@AhmedMoamen62 have you found a solution? I'm facing the same problem
@KhaledSharif until yet, no
@AhmedMoamen62 I looked into this a bit and I think it's not possible. If you check the `_build_volume_2d3` function in `anynet.py`, you will see it uses a function called `warp`, which warps the right image onto the left during training. The network is therefore built around disparity only. Looking forward to hearing the author's thoughts too.
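Not an official fix, but since the architecture itself outputs disparity, one option (in the spirit of Pseudo-LiDAR) would be to leave the network unchanged and convert the predicted disparity to depth *inside* the loss, using the standard stereo relation depth = focal × baseline / disparity. A minimal sketch, assuming the prediction is a disparity tensor and you have the camera focal length and stereo baseline; the function names here are mine, not from the AnyNet repo:

```python
import torch
import torch.nn.functional as F

def disparity_to_depth(disparity, focal, baseline, eps=1e-6):
    # Standard stereo geometry: depth = focal_length * baseline / disparity.
    # Clamp avoids division by zero where disparity is (near) zero.
    return focal * baseline / disparity.clamp(min=eps)

def depth_loss(pred_disp, gt_depth, focal, baseline):
    # Convert the network's disparity prediction to depth, then compare
    # against the depth ground truth with a smooth L1 loss, ignoring
    # pixels that have no ground truth (encoded here as depth <= 0).
    pred_depth = disparity_to_depth(pred_disp, focal, baseline)
    mask = gt_depth > 0
    return F.smooth_l1_loss(pred_depth[mask], gt_depth[mask])
```

Whether this actually trains well is another question, since errors in depth blow up at small disparities, but it at least lets you supervise in depth space without touching `_build_volume_2d3` or `warp`.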