
Is Your Explanation Reliable: Confidence-Aware Explanation on Graph Neural Networks

About

Explaining Graph Neural Networks (GNNs) has garnered significant attention due to the need for interpretability, enabling users to better understand the behavior of these black-box models and extract valuable insights from their predictions. While numerous post-hoc instance-level explanation methods have been proposed to interpret GNN predictions, the reliability of these explanations remains uncertain, particularly on out-of-distribution or unseen test data. In this paper, we address this challenge by introducing an explainer framework with a confidence scoring module (ConfExplainer), grounded in a theoretical principle, the generalized graph information bottleneck with confidence constraint (GIB-CC), which quantifies the reliability of generated explanations. Experimental results demonstrate the superiority of our approach, highlighting the effectiveness of the confidence score in enhancing the trustworthiness and robustness of GNN explanations.
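To make the GIB-CC idea concrete, here is a minimal, hypothetical sketch of a graph-information-bottleneck-style explanation loss with a confidence weight. This is not the paper's actual formulation; the function name, the KL stand-in for the prediction term, the entropy stand-in for the compression term, and the way confidence enters the objective are all illustrative assumptions.

```python
import numpy as np

def gib_confidence_loss(edge_mask, y_full_prob, y_expl_prob, confidence, beta=0.1):
    """Hypothetical GIB-style explanation objective with a confidence weight.

    edge_mask:    soft importance in (0, 1) for each edge of the explanation subgraph.
    y_full_prob:  model class probabilities on the full graph.
    y_expl_prob:  model class probabilities on the masked (explanation) subgraph.
    confidence:   scalar in (0, 1], an assumed reliability score for this
                  explanation (ConfExplainer learns such a score; the exact
                  use of it here is an assumption).
    """
    eps = 1e-12
    # Prediction term: KL(full-graph prediction || explanation prediction),
    # standing in for maximizing mutual information between explanation and label.
    pred_term = np.sum(y_full_prob * np.log((y_full_prob + eps) / (y_expl_prob + eps)))
    # Compression term: binary entropy of the edge mask, standing in for
    # limiting the information the explanation retains about the input graph.
    m = np.clip(edge_mask, eps, 1 - eps)
    info_term = np.mean(-m * np.log(m) - (1 - m) * np.log(1 - m))
    # Assumed confidence weighting: down-weight the prediction term when the
    # explanation is deemed unreliable.
    return confidence * pred_term + beta * info_term
```

Under this sketch, an explanation whose masked-subgraph prediction matches the full-graph prediction incurs only the small compression penalty, while a mismatched explanation pays an additional KL cost scaled by its confidence.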

Jiaxing Zhang, Xiaoou Liu, Dongsheng Luo, Hua Wei• 2025

Related benchmarks

Task                  Dataset   Metric         Result  Rank
Graph Classification  PROTEINS  Accuracy       72.5    994
Graph Classification  MUTAG     Accuracy       82.5    862
Graph Classification  COLLAB    Accuracy       77.5    422
Graph Classification  IMDB-M    Accuracy       44.9    275
Graph Classification  PTC-MR    Accuracy       66.8    197
Graph Classification  DHFR      Accuracy       71.8    140
Graph Classification  BZR       Accuracy       83.1    89
Graph Classification  COX2      Accuracy       75.5    80
Graph Classification  IMDB-B    Mean Accuracy  65.6    39
Graph Classification  DBLP v1   Accuracy       82.7    25

(Showing 10 of 14 rows)
