Confused about the eval results
Hi @niladridutt. Thanks for updating the eval code; I have reproduced the results.
However, the "err" value in the results is confusing, and I don't know how to get close to the numbers reported in the paper.
As you can see, the accuracy is close to the paper's result, but the error is far off.
Looking forward to your reply.
Hi @Mecel1147
I have updated the code to add the missing "setup_args.py" file; that may fix the issue with the error calculation. The eval scripts have also been updated to display the key metrics directly, so you no longer need to check the CSV.
Also, make sure you use the remeshed SHREC'19 dataset (https://nuage.lix.polytechnique.fr/index.php/s/LJFXrsTG22wYCXx/download?path=%2F&files=SHREC_r.zip)
Closing this since related works have been able to reproduce the results.