Explanation Accuracy on MUTAG
Dear authors,
Thanks for your interesting work and for sharing the code! I have tried the code and encountered a problem when using GraphSVX on the MUTAG dataset: I only reached an explanation accuracy of 0.1. I trained a GCN/GNN on MUTAG using your training script, but my model's accuracy is also lower (about 10% lower) than the result reported in the paper, so it could be that my model is not well trained.
Could you please provide the hyperparameters you used for training the model, as well as those used for evaluating the explanations on MUTAG?
Thank you very much for your help in advance!
Best,
Hi,
Thank you for your message.
I just went back to my notes to find the right hyper-parameters for MUTAG. Unfortunately, I added MUTAG right before the submission and was not organised enough with my experiments at the time, so I could not find much… What I remember is that MUTAG was trickier to evaluate because its ground-truth explanations rely on edges (not nodes, unlike the default version of GraphSVX), so I struggled a bit more to get good results on this dataset than on the others.
I found these settings marked down somewhere, but I don't know whether they are the SOTA results or intermediate ones:

python3 script_eval_gt.py --dataset='Mutagenicity' --num_samples=300/500 --S=3 --coal='SmarterSeparate' --hv='compute_pred' --feat='All'

Training: 800 epochs, weight decay = 0.002, final loss = 0.28. Results: 0.88, 0.84, 0.81 (not exactly SOTA, but closer than what you found, I believe).
I am sorry I am not able to help out more. Good luck!
Best wishes, Alexandre
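(Editor's note: the `--num_samples=300/500` shorthand in the command above presumably means two separate runs, one with 300 samples and one with 500; this is an assumption, not something the author confirmed. A minimal sketch with a hypothetical `expand_commands` helper that turns the shorthand into the two concrete command lines, keeping all other flags verbatim:)

```python
# All flags other than --num_samples are copied verbatim from the
# author's note; only the expansion of "300/500" into two runs is assumed.
BASE = (
    "python3 script_eval_gt.py --dataset='Mutagenicity' "
    "--S=3 --coal='SmarterSeparate' --hv='compute_pred' --feat='All'"
)

def expand_commands(sample_counts=(300, 500)):
    """Return one full command line per num_samples setting."""
    return [f"{BASE} --num_samples={n}" for n in sample_counts]

for cmd in expand_commands():
    print(cmd)
```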
Dear author,
Thanks for sharing your work.
Could you be more specific about the training hyper-parameters for MUTAG, such as the learning rate, weight clipping, weight decay, and batch size? The best accuracy I could get from the repo is about 80%.
Thanks!