
DTU image synthesis experiment

Open · Shubhendu-Jena opened this issue on Jul 24, 2024 · 1 comment

Hi,

Thank you for the wonderful work. I would like to replicate your DTU image synthesis from point cloud experiments. My questions are as follows:

  1. You mentioned using DGCNN to obtain point features rather than the sparse encoder. Is there a reason behind this choice?
  2. How did you normalize the data for DTU? I am interested in using the pretrained PonderV2 model to obtain point features for COLMAP point clouds on DTU. Could you please give me some pointers on how to process the point clouds so that PonderV2 produces the right features?

Thanks in advance

Shubhendu-Jena · Jul 24 '24 09:07

Hi,

  1. We chose DGCNN because we needed a point encoder that provides per-point features, and DGCNN fits this requirement well. However, using another encoder, such as the sparse encoder you mentioned, should also be fine (a minimal sketch of what a per-point encoder looks like follows this list).

  2. For point feature extraction, we applied the same normalization method used in the pre-training process (if my memory serves me correctly). I recommend following that normalization approach to obtain the most suitable features for your application (a generic normalization sketch is also included below).
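
To illustrate what "per-point features" means in point 1, here is a minimal, generic EdgeConv layer of the kind DGCNN stacks: it maps a cloud of N points to N feature vectors. This is only a PyTorch sketch and not the exact encoder used for PonderV2; the `knn` helper and `EdgeConv` class are illustrative names, not repository code.

```python
# Minimal EdgeConv sketch: build a k-NN graph, form edge features [x_i, x_j - x_i],
# apply a shared MLP, and max-pool over neighbors to get one feature per point.
import torch
import torch.nn as nn

def knn(x, k):
    # x: (B, C, N) -> indices of the k nearest neighbors for each point, (B, N, k)
    inner = -2 * torch.matmul(x.transpose(2, 1), x)        # (B, N, N)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)            # (B, 1, N)
    dist = -xx - inner - xx.transpose(2, 1)                 # negative squared distance
    return dist.topk(k=k, dim=-1)[1]

class EdgeConv(nn.Module):
    def __init__(self, in_channels, out_channels, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        # x: (B, C, N) -> per-point features (B, out_channels, N)
        B, C, N = x.shape
        idx = knn(x, self.k)                                             # (B, N, k)
        idx = (idx + torch.arange(B, device=x.device).view(-1, 1, 1) * N).view(-1)
        neighbors = x.transpose(2, 1).contiguous().view(B * N, C)[idx]
        neighbors = neighbors.view(B, N, self.k, C)
        x_i = x.transpose(2, 1).unsqueeze(2).expand(-1, -1, self.k, -1)  # (B, N, k, C)
        edge = torch.cat([x_i, neighbors - x_i], dim=3).permute(0, 3, 1, 2)
        return self.mlp(edge).max(dim=-1)[0]                             # pool over k

# pts = torch.randn(2, 3, 1024)               # 2 clouds, 1024 xyz points each
# feats = EdgeConv(3, 64)(pts)                # -> (2, 64, 1024): one vector per point
```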

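On the normalization side, a common recipe is to center the cloud and scale it into a unit sphere before feeding it to the encoder. The snippet below is a hedged sketch of that recipe, not the actual PonderV2 pre-training transform; please check the repository's data pipeline to confirm the exact operation. `normalize_point_cloud` and the COLMAP parsing line are hypothetical.

```python
# Hedged sketch of a generic point cloud normalization for COLMAP exports.
# Verify against PonderV2's own dataset transforms before relying on it.
import numpy as np

def normalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Center points at the origin and scale them into a unit sphere.

    points: (N, 3) array of xyz coordinates, e.g. exported from COLMAP.
    """
    centroid = points.mean(axis=0)                   # translate cloud to the origin
    centered = points - centroid
    scale = np.linalg.norm(centered, axis=1).max()   # furthest point from the centroid
    return centered / (scale + 1e-8)                 # all points lie inside the unit sphere

# Hypothetical usage with a COLMAP text export (adjust parsing to your files):
# pts = np.loadtxt("points3D.txt", usecols=(1, 2, 3))
# pts_normalized = normalize_point_cloud(pts)
```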
dihuangdh · Jul 25 '24 17:07