PonderV2
A question about "ScanNet Test Result"
Hi, thank you so much for sharing your great work. I am building new work on top of your code. I found that the released checkpoint reproduces 77.0% mIoU on the ScanNet validation set. However, when I submitted to the ScanNet official website (3D Semantic Label Benchmark), I only got 73.9% mIoU, versus the reported 78.5% mIoU. Is there a problem with my submission? I produced the test-set results as follows:
- In semseg-ppt-v1m1-0-sc-s3-st-spunet-lovasz-ft.py, I replaced the original "val" split in the test section with "test" (see the config sketch after this list).
- After running PonderV2, I packaged the submit folder inside the result folder as submit.zip (see the packaging sketch below).
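
For clarity, this is roughly the edit I made. It is only a sketch assuming the Pointcept-style config layout; the surrounding fields in the actual file may differ, and I only changed the `split` value:

```python
# Sketch of the edit in the test section of the config. Field names follow the
# Pointcept-style layout; everything except `split` is kept as in the released
# semseg-ppt-v1m1-0-sc-s3-st-spunet-lovasz-ft.py file.
data = dict(
    # ... train and val sections left unchanged ...
    test=dict(
        type="ScanNetDataset",
        split="test",              # changed from "val" so inference runs on the test split
        data_root="data/scannet",  # path from my local setup
        test_mode=True,
        # ... transform / test_cfg entries kept exactly as released ...
    ),
)
```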
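
And this is how I package the predictions before uploading. The result directory path is just a placeholder from my local run, not the exact path in the repo:

```python
import shutil

# Zip the "submit" folder produced by the test run into submit.zip for the
# ScanNet benchmark upload. `result_dir` is a placeholder for my local output path.
result_dir = "exp/scannet/semseg-ppt-v1m1-0-sc-s3-st-spunet-lovasz-ft/result"
shutil.make_archive("submit", "zip", root_dir=result_dir, base_dir="submit")
```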
This way I only get 73.9% mIoU. Did I do something wrong in one of these steps, or am I missing a trick?
Looking forward to your reply; it is very important to me. Thank you!