Binyr

Results: 9 issues of Binyr

Hi, thanks for your code! The 'scale' and 'objpos' fields are also not provided officially. How do you generate them? Looking forward to your guidance! Thank you!

Hi, thanks for your code! I just found the original LSP dataset with 2000 images on the official website. I would be very glad for your guidance!

Hi! Thanks for providing such wonderful work. I have read your code and visualized the results. I observe that two almost identical boxes have very different cls_score values. I wonder if...

Hi @nihalsid, when I tried to train a model for hypersim ai_001_008 on data I generated myself, I got a slightly worse mIoU (0.681) than the provided model (mIoU: 0.693)...

Hi, this line samples torch.float32 sigmas (https://github.com/pixeli99/SVD_Xtend/blob/609fbf92a3e39a78ed1e2a9e0764a1e912103c48/train_svd.py#L964). As a result, inp_noisy_latents is also converted to torch.float32 (https://github.com/pixeli99/SVD_Xtend/blob/609fbf92a3e39a78ed1e2a9e0764a1e912103c48/train_svd.py#L972). The same problem occurs at (https://github.com/pixeli99/SVD_Xtend/blob/609fbf92a3e39a78ed1e2a9e0764a1e912103c48/train_svd.py#L969). Is this torch.float32 tensor necessary?...
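The silent upcast described here follows from ordinary type-promotion rules: multiplying or adding a float32 tensor into half-precision data promotes the result to float32. The sketch below reproduces the effect with NumPy (the promotion behavior is analogous to PyTorch's); the shapes and the cast-back fix are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

# Half-precision latents, as in mixed-precision training (shapes are hypothetical).
latents = np.random.randn(2, 4, 8, 8).astype(np.float16)
# Per-sample noise levels sampled in float32, as at train_svd.py#L964.
sigmas = np.random.rand(2, 1, 1, 1).astype(np.float32)
noise = np.random.randn(*latents.shape).astype(np.float16)

# float16 + float32 promotes the whole expression to float32.
noisy = latents + sigmas * noise
print(noisy.dtype)  # float32

# A common remedy is an explicit cast back to the compute dtype
# (in PyTorch: noisy.to(weight_dtype)) before the model forward pass.
noisy = noisy.astype(np.float16)
print(noisy.dtype)  # float16
```

Whether the float32 intermediate matters in practice depends on whether it is cast back before the UNet forward; if it is, the extra precision only costs memory for that intermediate.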

Hi, thanks for your great work! I downloaded your dataset and found that the vkitti normals are generated using "discontinuity-aware plane fitting". Is "discontinuity-aware plane fitting" implemented here? https://github.com/baegwangbin/DSINE/blob/main/utils/d2n/plane_svd.py Thank you! Yanrui

Dear author, thanks for your great work! The loss weighting here (https://github.com/pixeli99/SVD_Xtend/blob/609fbf92a3e39a78ed1e2a9e0764a1e912103c48/train_svd.py#L1028C17-L1028C25) is different from the weight used in EDM. Could you explain this line for me? Thank you, Yanrui
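For reference, the per-sample loss weight in the EDM paper (Karras et al., "Elucidating the Design Space of Diffusion-Based Generative Models", 2022) that the question compares against is λ(σ) = (σ² + σ_data²) / (σ·σ_data)². A minimal sketch of that weight, assuming the common default σ_data = 0.5 (the repository's actual weighting at the linked line may differ, which is the point of the question):

```python
def edm_loss_weight(sigma: float, sigma_data: float = 0.5) -> float:
    """EDM loss weight lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2.

    This makes the weighted denoising loss roughly uniform in magnitude
    across noise levels sigma.
    """
    return (sigma**2 + sigma_data**2) / (sigma * sigma_data) ** 2

# Example: at sigma = 1.0 with sigma_data = 0.5,
# lambda = (1.0 + 0.25) / 0.25 = 5.0
print(edm_loss_weight(1.0))  # 5.0
```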

Hi, thanks for your great work! I wonder how to evaluate your model on the normal dataset. It seems there are only scripts for the depth dataset. Thank you, Yanrui

Amazing! This is the most complete and most heartfelt explanation I have ever seen. I would really like to get to know the blogger.