Tanks & Temples Dataset and Video Generation
Thanks for sharing your excellent work. Could you please provide the Tanks and Temples (TnT) dataset you used? Also, could you share how you generated the videos you display?
Thank you for your interest in our work! I have uploaded the TnT dataset to Google Drive; you can access it here. As noted in the Limitations section, our method uses a vanilla MLP, which struggles to capture the complete geometry of relatively large or complex scenes.
For video generation, we followed the approach from the MonoSDF repository, available here. The process involves: 1) obtaining a sparse camera trajectory, 2) interpolating between these keyframe camera poses to generate a continuous trajectory, 3) rendering an image at each interpolated pose, and 4) compiling the rendered images into a video.
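The interpolation step (2) can be sketched as follows. This is not the repository's code, just a minimal illustration with a hypothetical `interpolate_trajectory` helper: rotations are interpolated with spherical linear interpolation (slerp) and camera centers with per-axis linear interpolation, which is the usual recipe for smooth fly-through trajectories.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_trajectory(c2w_poses, n_frames):
    """Interpolate a dense trajectory from sparse 4x4 camera-to-world poses."""
    c2w_poses = np.asarray(c2w_poses, dtype=np.float64)
    key_times = np.linspace(0.0, 1.0, len(c2w_poses))
    query_times = np.linspace(0.0, 1.0, n_frames)

    # Rotations: spherical linear interpolation between keyframe orientations.
    slerp = Slerp(key_times, Rotation.from_matrix(c2w_poses[:, :3, :3]))
    rots = slerp(query_times).as_matrix()

    # Translations: per-axis linear interpolation between keyframe centers.
    trans = np.stack(
        [np.interp(query_times, key_times, c2w_poses[:, i, 3]) for i in range(3)],
        axis=-1,
    )

    out = np.tile(np.eye(4), (n_frames, 1, 1))
    out[:, :3, :3] = rots
    out[:, :3, 3] = trans
    return out
```

For step (4), the rendered frames can then be written out with something like `imageio.mimsave("video.mp4", frames, fps=30)`.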
Thanks for your kind reply. How can I obtain visualization results like those in Figures 6-7?
For scenes with good 3D meshes, such as Replica, we render the meshes into 2D images using the target camera extrinsics and intrinsics, similar to the approach in this script. For our generated 3D edges, we first save them as point clouds and then render them in the same way.
For scenes without complete 3D meshes, such as TnT, we first select a target RGB image and then adjust the camera parameters to render the generated 3D edges into a 2D image aligned with that RGB image.
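At its core, rendering 3D edge points into a given view is a standard pinhole projection with the target camera's intrinsics and extrinsics. A minimal sketch (the function name `project_points` is mine, not from the repository):

```python
import numpy as np

def project_points(points_w, K, w2c):
    """Project (N, 3) world-space points to pixel coordinates.

    K   : (3, 3) camera intrinsics
    w2c : (4, 4) world-to-camera extrinsic matrix
    Returns (N, 2) pixel coordinates and an (N,) mask of points
    lying in front of the camera.
    """
    pts_h = np.concatenate([points_w, np.ones((len(points_w), 1))], axis=1)
    pts_c = (w2c @ pts_h.T).T[:, :3]   # camera-space coordinates
    in_front = pts_c[:, 2] > 1e-6      # keep points with positive depth
    uv = (K @ pts_c.T).T
    uv = uv[:, :2] / uv[:, 2:3]        # perspective divide
    return uv, in_front
```

The projected pixels can then be splatted or drawn over the selected RGB image to check the alignment.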
@rayeeli Could you please share the code? I tried to visualize the .ply file using open3d together with meta.json, but I only obtain points, not lines, for the DTU and Replica datasets.
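In case it helps while waiting for the official code: open3d draws lines only from a `LineSet`, not from a `PointCloud`, so the edge connectivity has to be supplied explicitly. A hedged sketch, assuming the .ply stores the two endpoints of each edge back-to-back (if meta.json instead stores explicit index pairs, use those as the `lines` array):

```python
import numpy as np

def segments_to_lineset(points):
    """Pair consecutive points (2i, 2i+1) into line-segment indices.

    Assumes the .ply stores edge endpoints back-to-back, which is one
    common layout; this is an assumption, not the repository's format.
    Returns the (P, 3) points and an (M, 2) integer index array suitable
    for an open3d LineSet.
    """
    points = np.asarray(points, dtype=np.float64)
    n = len(points) // 2
    lines = np.arange(2 * n).reshape(n, 2)
    return points[: 2 * n], lines
```

The two arrays can then be wrapped with `o3d.geometry.LineSet(points=o3d.utility.Vector3dVector(pts), lines=o3d.utility.Vector2iVector(lines))` and shown via `o3d.visualization.draw_geometries([...])`.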