PonderV2
DTU image synthesis experiment
Hi,
Thank you for the wonderful work. I would like to replicate your DTU image-synthesis-from-point-cloud experiments. My questions are as follows:
- You mentioned using DGCNN to obtain point features rather than the sparse encoder. Is there a reason behind this?
- How did you normalize the data for DTU? I am interested in using the pretrained PonderV2 to obtain point features for Colmap point clouds in DTU. Could you please give me some pointers on how to process the point clouds so that PonderV2 produces the right features?
Thanks in advance
Hi,
- We chose DGCNN because we needed a point encoder capable of providing per-point features, and DGCNN fits this requirement well. However, using another encoder, such as the sparse encoder you mentioned, should also be fine.
- For point feature extraction, we applied the same normalization method used in the pre-training process (if my memory serves me correctly). I recommend following this normalization approach to obtain the most suitable features for your application.
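
In case it helps, below is a minimal sketch of a common point-cloud normalization (centering at the origin and scaling into a unit sphere). This is only an illustration of the general idea, not the exact transform from our pre-training pipeline; please check the repo's data transforms for the precise convention (e.g. unit-cube scaling or dataset-specific bounds). The function name and file path are hypothetical.

```python
import numpy as np

def normalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Center an (N, 3) XYZ point cloud at the origin and scale it into a unit sphere.

    Note: this is a common convention, not necessarily the exact normalization
    used during PonderV2 pre-training; verify against the repo's data pipeline.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

# Illustrative usage with a Colmap reconstruction (path is a placeholder):
# colmap_points = np.loadtxt("points3D.txt", usecols=(1, 2, 3))
# normalized = normalize_point_cloud(colmap_points)
```

Applying the same transform at inference time as during pre-training is what matters most, since a mismatched scale or offset will shift the point coordinates out of the distribution the encoder was trained on.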