monosdf

About preprocessing

Open UestcJay opened this issue 3 years ago • 10 comments

Hi,

Thanks for the great work! Since 384 is the input size for the Omnidata model and the DTU image size is 1200x1600, if I want to use monocular cues at the original size, can I first resize 1200x1600 -> 1152x1536, get the monocular cues, and then upsample them to 1200x1600? Looking forward to your reply!
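For illustration, the workflow proposed above could be sketched as follows. This is only a sketch: nearest-neighbour resizing in NumPy stands in for a proper interpolation, and `predict` is a hypothetical placeholder for the Omnidata model call, not the repository's actual API.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize via index lookup (illustrative only)."""
    in_h, in_w = img.shape[:2]
    ys = np.arange(out_h) * in_h // out_h   # source row for each output row
    xs = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[ys][:, xs]

def cues_at_original_size(image, predict, model_h=1152, model_w=1536):
    """Resize to a model-friendly size, predict cues, upsample back.

    `predict` is a hypothetical stand-in for the monocular model.
    """
    h, w = image.shape[:2]                      # e.g. 1200 x 1600 for DTU
    small = resize_nn(image, model_h, model_w)  # 1152 x 1536
    cues = predict(small)                       # e.g. a depth or normal map
    return resize_nn(cues, h, w)                # back to 1200 x 1600
```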

UestcJay avatar Feb 02 '23 03:02 UestcJay

Hi, we also simply resize the monocular outputs with padding for the 1200x1600 DTU images. You can check it here: https://github.com/autonomousvision/monosdf/blob/main/preprocess/paded_dtu.py. Another way to get high-resolution monocular priors is described here: https://github.com/autonomousvision/monosdf#high-resolution-cues.
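For anyone reading along, a minimal sketch of the pad-then-crop bookkeeping involved might look like this. It is not the repository's actual paded_dtu.py, just a hypothetical illustration of the idea.

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Pad the shorter side so the image becomes square (centered).

    Returns the padded image and the crop box needed to undo the padding.
    """
    h, w = img.shape[:2]
    s = max(h, w)
    top = (s - h) // 2
    left = (s - w) // 2
    out = np.full((s, s) + img.shape[2:], fill, dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out, (top, left, h, w)

def crop_back(padded, box):
    """Recover the original region from a padded image."""
    top, left, h, w = box
    return padded[top:top + h, left:left + w]
```

Note that padding shifts the image content by (top, left) pixels, which is why the camera principal point would need a corresponding offset; that is presumably what the question above about "modifying the parameters of the camera" refers to.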

niujinshuchong avatar Feb 02 '23 20:02 niujinshuchong

I still have a question: is my method more convenient than the approach in paded_dtu.py, since there is no need to modify the camera parameters?

UestcJay avatar Feb 03 '23 07:02 UestcJay

You could just try it out.

niujinshuchong avatar Feb 03 '23 09:02 niujinshuchong

How many experiments are averaged for the CD value on the DTU dataset reported in the paper?

UestcJay avatar Feb 13 '23 01:02 UestcJay

It's averaged over 15 scenes.

niujinshuchong avatar Feb 13 '23 10:02 niujinshuchong

> Hi,
>
> Thanks for the great work! Since 384 is the input size for the Omnidata model and the DTU image size is 1200x1600, if I want to use monocular cues at the original size, can I first resize 1200x1600 -> 1152x1536, get the monocular cues, and then upsample them to 1200x1600? Looking forward to your reply!

Hello! I'd like to ask a question. The Omnidata model is trained with img_size 384. Can it support input at any image resolution, such as 1152x1536? Thank you!

Wuuu3511 avatar Feb 15 '23 07:02 Wuuu3511

Yes, as long as the height and width are multiples of 384.
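Following this suggestion, one could compute such model-friendly sizes with a small helper like the sketch below (illustrative only; `nearest_multiple` is not from the repository).

```python
def nearest_multiple(x, base=384):
    """Round x to the nearest positive multiple of `base`."""
    return max(base, round(x / base) * base)

# For the 1200x1600 DTU images this reproduces the sizes discussed above:
# nearest_multiple(1200) -> 1152, nearest_multiple(1600) -> 1536
```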

UestcJay avatar Feb 15 '23 08:02 UestcJay

> Yes, as long as the height and width are multiples of 384.

Thank you very much for your reply! I tried using 512x640 images as input, and the Omnidata model also returned a 512x640 depth map. An image of this size is not a multiple of 384. Does this result in a larger depth error?

Wuuu3511 avatar Feb 17 '23 13:02 Wuuu3511

Hello! I've got a question here. Does the resolution of the RGB images and the depth and normal cues impact the reconstruction result? If so, why? Thank you for your reply! My experimental results have really confused me.

EugeneLiu01 avatar Feb 24 '23 09:02 EugeneLiu01

Hi, Omnidata is not trained on high-resolution images, so it's not clear whether it generalises in this case, and the reconstruction results might vary from scene to scene.

niujinshuchong avatar Feb 24 '23 10:02 niujinshuchong