About the test result in Table 2.
Thank you for your excellent work!
I want to ask about the test results in Table 2: results for novel view synthesis on the Google Scanned Objects (GSO) dataset. The results reported in your paper are slightly lower than those in the paper "SYNCDREAMER: GENERATING MULTIVIEW CONSISTENT IMAGES FROM A SINGLE-VIEW IMAGE", and when I checked the provided test set, it appears to be the same. Why does this difference occur?
Hi @handsomeli1898 ! Thanks for checking out our work!
We report 100 randomly sampled results from GSO only (due to compute limitations), and therefore the numbers might be slightly different.
I downloaded the GSO dataset that you provided, and it has 30 objects. I don't understand what "100 randomly sampled results" means. Does it mean that for each object you tested 100 times and averaged the results? Or did you randomly sample 100 times (so that, on average, each object is selected about 3 times)?
Sorry about the confusion! For Table 4, where we evaluate Chamfer distance, we use the 30 GSO objects from SyncDreamer. For Table 2, where we evaluate novel view synthesis performance via image similarity metrics, we use 100 objects sampled from the full GSO dataset.
For each object, we evaluate on 15 fixed views (similar to SyncDreamer), given 1 input view.
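For anyone trying to reproduce numbers under this protocol, here is a minimal sketch of what such an evaluation loop could look like. This is not the authors' actual script: the directory layout (`data/gso_full`, `gt/`, `pred/`), the `load_image` helper, and the random seed are all assumptions, and the exact evaluation views and metrics implementation in the repo may differ.

```python
import os
import random

import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical paths; adjust to your local GSO layout and your model's outputs.
GSO_ROOT = "data/gso_full"   # full GSO dataset (not the 30-object SyncDreamer subset)
NUM_OBJECTS = 100            # randomly sampled objects, as described above
NUM_VIEWS = 15               # fixed evaluation views per object

lpips_fn = lpips.LPIPS(net="vgg")

def load_image(path):
    """Load an RGB image as a float array in [0, 1] (placeholder loader)."""
    from PIL import Image
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

random.seed(0)  # arbitrary seed for the 100-object sample
objects = random.sample(sorted(os.listdir(GSO_ROOT)), NUM_OBJECTS)

psnrs, ssims, lpipss = [], [], []
for obj in objects:
    for v in range(NUM_VIEWS):
        gt = load_image(f"{GSO_ROOT}/{obj}/gt/{v:03d}.png")      # ground-truth view
        pred = load_image(f"{GSO_ROOT}/{obj}/pred/{v:03d}.png")  # synthesized view
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
        ssims.append(structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))
        # LPIPS expects NCHW tensors scaled to [-1, 1]
        to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2 - 1
        lpipss.append(lpips_fn(to_t(gt), to_t(pred)).item())

print(f"PSNR {np.mean(psnrs):.2f}  SSIM {np.mean(ssims):.4f}  LPIPS {np.mean(lpipss):.4f}")
```

Because the 100 objects are a random subset of the full dataset, metrics computed this way can differ slightly from numbers reported on the fixed 30-object SyncDreamer split.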