A question about symmetric camera locations
Thanks for the wonderful work!
Following up on the discussion in https://github.com/cvg/pixel-perfect-sfm/issues/109#issuecomment-1632133453.
I want to ask whether LIMAP can estimate poses for low-textured objects. I ran colmap_triangulation.py on COLMAP data, and the poses estimated for 360° captures of objects are wrong.
For example, with this cube data I get the following result:
https://github.com/cvg/limap/assets/8401456/41113e1d-53e1-4e65-996f-915cfa1d1c06
The poses/camera locations should form a 360° ring of views around the cube. Would you happen to have any tips for getting a better pose estimate?
P.S. The current cube reconstruction in sparse/0 was generated with Pixel-Perfect-SfM.
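For reference, here is a minimal sketch of how I sanity-check the camera locations in sparse/0 (this assumes the standard COLMAP world-to-camera pose convention; `qvec2rotmat` and `camera_center` are my own helpers, not LIMAP or COLMAP functions):

```python
import numpy as np

def qvec2rotmat(qvec):
    # Convert a COLMAP quaternion (w, x, y, z) to a 3x3 rotation matrix.
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def camera_center(qvec, tvec):
    # COLMAP stores world-to-camera poses: x_cam = R @ x_world + t,
    # so the camera center in world coordinates is C = -R^T @ t (not t itself).
    R = qvec2rotmat(qvec)
    return -R.T @ np.asarray(tvec, dtype=float)

# Identity rotation with t = (0, 0, 5) puts the camera at (0, 0, -5).
print(camera_center([1, 0, 0, 0], [0, 0, 5]))
```

Plotting these centers for all images is how I produced the visualization above; for a 360° capture they should trace a ring around the object.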
I really appreciate any help you can provide.
Are there any updates on this? I'd love to hear back from you.