Oisin Mac Aodha
Hi David, Great. I've actually got a tool under development at the moment, but it does not support this code base - it is for the new tools we are...
Hi all. Apologies for the slow response. The Kaggle servers for each competition (i.e. [2017](https://www.kaggle.com/c/inaturalist-challenge-at-fgvc-2017/), [2018](https://www.kaggle.com/c/inaturalist-2018), [2019](https://www.kaggle.com/c/inaturalist-2019-fgvc6), and [2021](https://www.kaggle.com/c/inaturalist-2021)) are still online so it should be possible for you to...
@andrewliao11 to clarify, are you trying to find user_ids for the iNat competition datasets or for the data used in "Lean Multiclass Crowdsourcing"? @gvanhorn38 do you have any insight on...
Hi there. The annotations files indicate the corresponding license for each image. Check [here](https://github.com/visipedia/inat_comp/tree/master/2021#annotation-format) for more info in the context of the 2021 dataset. All the best, Oisin
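In case it helps, here is a rough sketch of pulling the per-image license out of one of those annotation files. It assumes COCO-style keys (a top-level "licenses" list, a "license" id and "file_name" on each entry in "images") and a hypothetical file path, so double check the field names against the annotation-format docs linked above.

```python
import json

# Minimal sketch, assuming COCO-style annotation keys; the path is hypothetical.
with open("train.json") as f:
    data = json.load(f)

# Map license id -> license record (name, url, ...).
license_by_id = {lic["id"]: lic for lic in data.get("licenses", [])}

# Print the license for the first few images.
for img in data["images"][:5]:
    lic = license_by_id.get(img.get("license"))
    print(img["file_name"], lic["name"] if lic else "unknown license")
```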
Thanks for the reply! I'll let you know if I look into this.
Yes. For each pixel we predict a single scalar value specifying the disparity (i.e. scaled inverse depth). The "ground truth" disparity would point to the corresponding pixel in the second...
dl is aligned with the left image and dr with the right. We use backward mapping (B in the attached image) to reconstruct the right image from pixels in the...
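To make the backward-mapping idea concrete, here is a rough NumPy sketch (not the repo's TensorFlow bilinear sampler): for every pixel of the output (right) image, we look up which left-image pixel the right-aligned disparity points to. It assumes rectified stereo with the usual sign convention (the left-image match sits at x + d), disparity in pixels, and nearest-neighbour sampling for brevity.

```python
import numpy as np

def reconstruct_right(left, disp_right):
    # Backward mapping: for each output (right-image) pixel, fetch the
    # left-image pixel that the right-aligned disparity points to.
    # Nearest-neighbour sampling here; the actual model uses a
    # differentiable bilinear sampler.
    h, w = disp_right.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(np.round(xs + disp_right), 0, w - 1).astype(int)  # assumed sign convention
    return left[ys, src_x]
```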
With a known focal length and camera baseline you can convert disparity to depth. Both of these quantities are available in the KITTI stereo dataset. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html
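The conversion itself is just depth = focal_length * baseline / disparity, with everything in consistent units. A minimal sketch, assuming the focal length is in pixels, the baseline in metres (for KITTI the calibration files give these, with a baseline of roughly 0.54 m), and the disparity already expressed in pixels:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-8):
    # depth [m] = focal length [px] * baseline [m] / disparity [px].
    # eps guards against division by zero where the disparity is 0.
    return focal_px * baseline_m / np.maximum(disparity, eps)
```

Note that if the network's predicted disparities are normalised by image width rather than given in pixels, you would scale them back by the width before applying this.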
https://github.com/mrharicot/monodepth/issues/49#issuecomment-330595397
Closing based on lack of activity. Please reopen if the issue has not been resolved.