
ProtGNN: Towards Self-Explaining Graph Neural Networks

About

Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations, where another explanatory model is employed to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of GNNs motivates building GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs. In ProtGNN, the explanations are naturally derived from the case-based reasoning process and are actually used during classification. The prediction of ProtGNN is obtained by comparing the inputs to a few learned prototypes in the latent space. Furthermore, for better interpretability and higher efficiency, a novel conditional subgraph sampling module is incorporated in ProtGNN+ to indicate which part of the input graph is most similar to each prototype. Finally, we evaluate our method on a wide range of datasets and perform concrete case studies. Extensive results show that ProtGNN and ProtGNN+ can provide inherent interpretability while achieving accuracy on par with their non-interpretable counterparts.
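The core prediction step described above, comparing an input's latent embedding to a few learned prototypes and classifying from the similarity scores, can be sketched as follows. This is a minimal illustration assuming a ProtoPNet-style log-ratio similarity; the function names, the toy prototypes, and the prototype-to-class weight matrix are hypothetical, and the exact similarity form and GNN encoder in ProtGNN may differ.

```python
import numpy as np

def prototype_similarity(h, prototypes, eps=1e-4):
    """Similarity between a graph embedding h and each prototype:
    log((d^2 + 1) / (d^2 + eps)), which is large when h lies close
    to a prototype in latent space. (ProtoPNet-style assumption.)"""
    d2 = np.sum((prototypes - h) ** 2, axis=1)  # squared L2 distances
    return np.log((d2 + 1.0) / (d2 + eps))

def protgnn_logits(h, prototypes, weights):
    """Class logits as a linear combination of prototype similarities,
    so each prediction is traceable to the prototypes it resembles."""
    return prototype_similarity(h, prototypes) @ weights

# Toy example: 2-D embeddings, 4 prototypes (2 per class), 2 classes.
prototypes = np.array([[0.0, 0.0], [0.0, 1.0],   # class-0 prototypes
                       [5.0, 5.0], [5.0, 6.0]])  # class-1 prototypes
weights = np.array([[1.0, 0.0], [1.0, 0.0],      # prototype -> class map
                    [0.0, 1.0], [0.0, 1.0]])

h = np.array([0.1, 0.2])  # embedding near the class-0 prototypes
logits = protgnn_logits(h, prototypes, weights)
print(logits.argmax())  # -> 0
```

Because the logits are built directly from prototype similarities, the explanation (which prototypes the input resembles, and how strongly) is the same computation that produces the prediction, rather than a post-hoc attribution.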

Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, Chee-Kong Lee • 2021

Related benchmarks

Task                  Dataset   Accuracy  Rank
Graph Classification  PROTEINS  74.3      994
Graph Classification  MUTAG     80.5      862
Graph Classification  NCI1      74.13     501
Graph Classification  COLLAB    69.3      422
Graph Classification  IMDB-M    36.0      275
Graph Classification  DD        69.15     273
Graph Classification  PTC-MR    68.2      197
Graph Classification  DHFR      70.8      140
Graph Classification  BZR       82.5       89
Graph Classification  COX2      79.2       80
(Showing 10 of 20 benchmark rows.)
