
On Explainability of Graph Neural Networks via Subgraph Explorations

About

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose using Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes for computing Shapley values on graph data. Our work represents the first attempt to explain GNNs by identifying subgraphs explicitly and directly. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
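The Shapley-value idea above can be illustrated with a generic Monte Carlo estimator: sample random coalitions of surrounding nodes and average the subgraph's marginal contribution. This is a minimal sketch, not the paper's exact approximation scheme; `score` is a hypothetical stand-in for the trained GNN's output on a node coalition.

```python
import random

def shapley_subgraph(score, subgraph, neighbors, num_samples=200, seed=0):
    """Monte Carlo estimate of the importance (Shapley-style value) of
    `subgraph`.

    `score` maps a set of node ids to a scalar model output (a stand-in
    for the GNN prediction on the graph restricted to those nodes);
    `neighbors` are the coalition "players" drawn from the subgraph's
    surroundings. All names here are illustrative, not the paper's API.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Random coalition of neighbor nodes (each included with prob. 0.5).
        coalition = {v for v in neighbors if rng.random() < 0.5}
        # Marginal contribution of the subgraph to this coalition.
        total += score(coalition | set(subgraph)) - score(coalition)
    return total / num_samples
```

With an additive toy score such as `len`, the estimate recovers the subgraph's size exactly, since every marginal contribution is the same; for a real GNN the interactions between the subgraph and its neighbors make the averaging meaningful.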

Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji• 2021

Related benchmarks

Task               Dataset               Metric       Result   Rank
GNN Explanation    ba_cycle_wheel        Fid+         0.329    12
GNN Explanation    BA2Motifs             H-Fidelity   60.5     6
GNN Explanation    BBBP                  H-Fidelity   56.1     6
GNN Explanation    BACE                  H-Fidelity   0.5519   6
GNN Explanation    GraphSST2             H-Fidelity   54.87    6
GNN Explanation    MUTAG                 H-Fidelity   52.53    6
GNN Explanation    Twitter               H-Fidelity   0.5494   6
GNN Explanation    ba_cycle_wheel_grid   Fid+         0.562    6
Graph Explanation  BENZENE               Fid+         0.29     6
Graph Explanation  Alkane Carbonyl       Fid+         0.188    6

(Showing 10 of 14 rows.)
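The Fid+ (Fidelity+) scores in the table measure how much the model's predicted-class score drops when the explanation subgraph is removed from the input graph, averaged over test graphs. A minimal sketch, assuming a hypothetical `predict` interface that returns the score for the originally predicted class given a node set:

```python
def fidelity_plus(predict, graphs, explanations):
    """Fid+ sketch: average drop in the predicted-class score when the
    explanation's nodes are removed.

    `predict(nodes)` is a hypothetical stand-in for the GNN's output on
    the graph restricted to `nodes`; `graphs` and `explanations` are
    parallel lists of node sets. Higher Fid+ means removing the
    explanation hurts the prediction more, i.e. a more faithful explanation.
    """
    drops = []
    for nodes, expl in zip(graphs, explanations):
        full = predict(frozenset(nodes))
        masked = predict(frozenset(nodes) - frozenset(expl))
        drops.append(full - masked)
    return sum(drops) / len(drops)
```

Note the table mixes scales (probabilities in [0, 1] vs. percentage-like values), so ranks are only comparable within a dataset.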
