Experiments on NeRF-DS Datasets
Hi @xinggangw, thank you for sharing such an impressive piece of work! Dynamic-2D GS is an incredible contribution to the field, and I deeply appreciate the effort you have put into making this research open and accessible.
I have been experimenting with Dynamic-2D GS on the NeRF-DS dataset, which consists of monocular inputs from a static camera. During my experiments, I noticed that the rendered normals and depth maps appear quite blurry, and training sometimes even crashes unexpectedly.
Have you tried training on the NeRF-DS dataset yourselves? If so, are such poor-quality normals and depth maps expected, or might there be an issue with my setup?
I would greatly appreciate any insights or suggestions you could provide. Thank you again for your excellent work, and I look forward to your response.
Best regards, Longxiang-ai
(Figure: results on the "as" scene of the NeRF-DS dataset; panels show GT, Render, Depth, Depth Normal, and Normal.)
Hi, thank you very much for trying Dynamic-2dgs on the NeRF-DS dataset! I have not experimented with this dataset before. In my view, applying Dynamic-2dgs to monocular dynamic scenes is still very challenging; my previous attempts focused more on dynamic objects in multi-view scenes.
Dynamic-2dgs still has many problems and limitations, and we plan to optimize it further. Thank you again for your attempt — we hope you will keep in touch and share more valuable suggestions with us!