Haiyang Yu
Hello, thank you for pointing this out. I have uploaded the checkpoints and results to the directory. Many thanks. To obtain `SubgraphX` results at different sparsity levels, you can change...
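For reference, a minimal sketch of controlling the explanation size, and hence the sparsity, of `SubgraphX` in DIG. It assumes the `dig.xgraph.method.SubgraphX` API (argument names may differ across DIG versions), and `model`, `data`, and `node_idx` are placeholders for your trained model and dataset:

```python
import torch
from dig.xgraph.method import SubgraphX

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# `model` is assumed to be a trained torch_geometric GNN for node classification.
explainer = SubgraphX(model, num_classes=4, device=device,
                      explain_graph=False, reward_method='nc_mc_l_shapley')

# A smaller `max_nodes` budget keeps fewer nodes in the explanation
# subgraph, which corresponds to a higher sparsity level.
for max_nodes in (5, 10, 15):
    _, explanation_results, related_preds = explainer(
        data.x, data.edge_index, node_idx=node_idx, max_nodes=max_nodes)
```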
Hello, I think it's fine to update `captum`. However, since some APIs may change across versions, pinning 0.2.0 (e.g., `pip install captum==0.2.0`) makes sure this problem won't happen.
Hello, DIG is currently built on `torch_geometric`, and its explainability module relies on the [MessagePassing](https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/message_passing.html#MessagePassing) class. Unfortunately, the DIG explainability methods can't be applied to DGL models...
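To illustrate the requirement, here is a sketch of a compatible model: any `torch_geometric` network whose layers subclass `MessagePassing` (such as a plain `GCNConv` stack) exposes the layers the explainers hook into, whereas a DGL model does not. The dimensions below are arbitrary placeholders:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, MessagePassing

class GCN(torch.nn.Module):
    """A torch_geometric model; its GCNConv layers subclass MessagePassing,
    which is the hook point for the explainability methods."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(in_dim=10, hidden_dim=32, num_classes=4)
# The explainers locate the layers to mask with roughly this check:
mp_layers = [m for m in model.modules() if isinstance(m, MessagePassing)]
assert len(mp_layers) == 2
```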
Yes, I think porting the DIG explainability methods to DGL would be hard work, and the current implementation does not support GNNs that use edge features, such as `RGCN`.
Hello, according to the [GNNExplainer](https://arxiv.org/pdf/1903.03894.pdf) and [MUTAG](https://pubs.acs.org/doi/10.1021/jm00106a046) papers, carbon rings together with the chemical groups NH2 and NO2 are responsible for mutagenicity. However, they appear in molecules with both labels...
Hello, you can refer to our new tutorial materials, which include an example of visualizing results on the `ba_shapes` dataset: https://github.com/divelab/DIG/blob/dig-stable/tutorials/KDD2022/xgraph_code_tutorial.ipynb
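As a rough sketch of the idea (using generic PyG/networkx utilities rather than the tutorial's own plotting helpers): draw the k-hop neighborhood of the target node and highlight the nodes the explainer selected. Here `data`, `node_idx`, and `explained_nodes` are placeholders from earlier steps:

```python
import matplotlib.pyplot as plt
import networkx as nx
from torch_geometric.utils import k_hop_subgraph

# Extract the 3-hop neighborhood around the explained node.
subset, sub_edge_index, _, _ = k_hop_subgraph(
    node_idx, 3, data.edge_index, relabel_nodes=True)

# Build a networkx graph over the relabeled nodes 0..N-1.
G = nx.Graph()
G.add_nodes_from(range(subset.size(0)))
G.add_edges_from(sub_edge_index.t().tolist())

# Color nodes that belong to the explanation (by original node id).
colors = ['red' if int(n) in explained_nodes else 'lightgray'
          for n in subset.tolist()]
nx.draw(G, node_color=colors, node_size=80)
plt.show()
```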
Hello, the currently implemented explanation algorithms operate on homogeneous graphs, so unfortunately they can't be applied to heterogeneous GNNs without modification.
I am not sure whether these explainability methods will work, since they were not designed for heterogeneous GNNs. You are welcome to share insights if you find papers...
Hello, could you provide the GCN and PGExplainer model configurations? In addition, have you tried other explanation methods on this GCN model, and what were their results?
Hello, I am unsure about this problem. Given that all the edges are masked out, it seems that the scores for all the edges are similar, or the...
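In case it helps to diagnose this, here is a small sketch in plain PyTorch (with `edge_mask` standing in for whatever importance tensor the explainer returns) that checks whether the edge scores are nearly uniform:

```python
import torch

def inspect_edge_mask(edge_mask: torch.Tensor) -> None:
    """Print summary statistics of an explainer's edge importance scores.

    If min, max, and std are all close together, every edge receives a
    near-identical score, and any sparsity threshold will tend to keep
    or drop all edges at once.
    """
    print(f"min={edge_mask.min():.4f}  max={edge_mask.max():.4f}  "
          f"mean={edge_mask.mean():.4f}  std={edge_mask.std():.4f}")

# Example with a degenerate (near-uniform) mask:
inspect_edge_mask(torch.full((50,), 0.5) + 1e-4 * torch.randn(50))
```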