Hi! Thanks so much! `train.py` is complete. Feel free to use it with `mvd_train.yaml` in the configs, but you may need to adjust the paths of the weights and...
Hey! This code is from the original Objaverse GitHub; here is a slightly modified version: [a_blender_render.txt](https://github.com/user-attachments/files/16043045/a_blender_render.txt)
Hi @fangfang11-plog ! Thanks for checking out our paper! Please download the training data from https://huggingface.co/datasets/allenai/objaverse :)
Hi @handsomeli1898 ! Thanks for checking out our work! We report results on 100 randomly sampled GSO objects only (due to compute limitations), so the numbers might be slightly different.
Sorry about the confusion! For Table 4, where we evaluate Chamfer distance, we use the 30 GSO objects from SyncDreamer. For Table 2, where we evaluate novel view synthesis performance...
Hi @Mrguanglei, sorry for the delay. The dataset format is common across dataloaders; we use PyTorch3D cameras: https://github.com/zhizdev/mvdfusion/blob/main/dataset/objaverse.py#L115C1-L126C10
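In case it helps, here is a minimal sketch of building a PyTorch3D `PerspectiveCameras` object from an R/T pose; the `make_camera` helper and the focal-length default are illustrative, and the repo's actual conventions are in the linked `objaverse.py` lines.

```python
import torch
from pytorch3d.renderer import PerspectiveCameras

def make_camera(R, T, focal_length=1.0, device="cpu"):
    # R: (3, 3) world-to-camera rotation, T: (3,) translation.
    # focal_length is in NDC units (illustrative default, not the repo's value).
    return PerspectiveCameras(
        R=R[None],                  # batch to (1, 3, 3)
        T=T[None],                  # batch to (1, 3)
        focal_length=focal_length,
        device=device,
    )

# Example: identity rotation, camera 2 units back along +z.
cam = make_camera(torch.eye(3), torch.tensor([0.0, 0.0, 2.0]))
```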
You can reduce the number of input images, or fine-tune only the cross-attention layers, to make training fit in GPU memory; a sketch of the latter is below.
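A minimal freezing sketch, assuming a diffusers-style UNet where cross-attention modules are named `attn2` (self-attention is `attn1`); check your actual module names first, since they differ across codebases.

```python
def freeze_all_but_cross_attention(unet):
    # Leave gradients enabled only for cross-attention parameters.
    for name, param in unet.named_parameters():
        param.requires_grad = "attn2" in name

    trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
    total = sum(p.numel() for p in unet.parameters())
    print(f"training {trainable:,} / {total:,} parameters")
```

Then hand only the trainable parameters to the optimizer, e.g. `torch.optim.AdamW((p for p in unet.parameters() if p.requires_grad), lr=1e-5)`.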
Hi! We perform two post-processing steps. First, we mask the pixels we unproject via a threshold on the RGB image. Second, we remove outliers; below is a sample function.
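A minimal sketch of both steps, assuming a white render background and Open3D's `remove_statistical_outlier`; the thresholds and function names here are illustrative, not necessarily the repo's actual implementation.

```python
import numpy as np
import open3d as o3d

def mask_foreground(rgb, threshold=0.95):
    # rgb: (H, W, 3) float image in [0, 1]; assumes a white background,
    # so near-white pixels are treated as background and dropped.
    return rgb.mean(axis=-1) < threshold  # (H, W) boolean mask

def remove_outliers(points, nb_neighbors=20, std_ratio=2.0):
    # points: (N, 3) unprojected 3D points. Statistical outlier removal
    # drops points whose mean neighbor distance deviates by more than
    # std_ratio standard deviations from the global mean.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd, kept_idx = pcd.remove_statistical_outlier(
        nb_neighbors=nb_neighbors, std_ratio=std_ratio
    )
    return np.asarray(pcd.points), kept_idx
```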
If you are loading depth from a `.png` in [0, 1], then you would need to unscale the values. If you are using `pred_depth` from `model_output[:,4:,...]` as in `test.py`, you would need to...
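For the first case, a hypothetical sketch, assuming the `.png` depth was linearly normalized to [0, 1] between known near/far planes; the `near`/`far` defaults below are placeholders, so substitute whatever your renderer actually used.

```python
import imageio.v3 as iio
import numpy as np

def load_metric_depth(path, near=0.5, far=3.5):
    d = iio.imread(path).astype(np.float32)
    if d.max() > 1.0:                       # 8-bit or 16-bit png
        d /= 255.0 if d.max() <= 255.0 else 65535.0
    return near + d * (far - near)          # undo the [0, 1] normalization
```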
Hi @YZsZY, thanks for checking this out and posting your findings! We don't have enough information to comment on them at the moment, but one thing that is true is that...