
Global Counterfactual Explainer for Graph Neural Networks

About

Graph neural networks (GNNs) find applications in various domains such as computational biology, natural language processing, and computer security. Owing to their popularity, there is an increasing need to explain GNN predictions since GNNs are black-box machine learning models. One way to address this is counterfactual reasoning, where the objective is to change the GNN prediction with minimal changes to the input graph. Existing methods for counterfactual explanation of GNNs are limited to instance-specific local reasoning. This approach has two major limitations: it cannot offer global recourse policies, and it overloads human cognitive ability with too much information. In this work, we study the global explainability of GNNs through global counterfactual reasoning. Specifically, we want to find a small set of representative counterfactual graphs that explains all input graphs. Towards this goal, we propose GCFExplainer, a novel algorithm powered by vertex-reinforced random walks on an edit map of graphs with a greedy summary. Extensive experiments on real graph datasets show that the global explanation from GCFExplainer provides important high-level insights into the model behavior and achieves a 46.9% gain in recourse coverage and a 9.5% reduction in recourse cost compared to state-of-the-art local counterfactual explainers.
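To make the core sampling idea concrete, here is a minimal sketch of a vertex-reinforced random walk, the mechanism the abstract names. This is not the paper's implementation: the toy adjacency structure, function name, and the simple `1 + visit count` reinforcement rule are illustrative assumptions, shown only to convey how such a walk biases itself toward frequently visited vertices (in GCFExplainer, toward promising counterfactual candidates on the edit map).

```python
import random

def vertex_reinforced_walk(adj, start, steps, seed=0):
    """Minimal vertex-reinforced random walk (illustrative sketch).

    The probability of stepping to a neighbor is proportional to
    1 + its visit count, so vertices visited often attract the walk.
    """
    rng = random.Random(seed)
    visits = {v: 0 for v in adj}  # visit counts drive reinforcement
    current = start
    visits[current] += 1
    path = [current]
    for _ in range(steps):
        nbrs = adj[current]
        weights = [1 + visits[n] for n in nbrs]  # reinforcement rule
        current = rng.choices(nbrs, weights=weights, k=1)[0]
        visits[current] += 1
        path.append(current)
    return path, visits

# Toy graph standing in for the edit map of candidate graphs.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
path, visits = vertex_reinforced_walk(adj, start=0, steps=50)
```

After such a walk, the paper's approach greedily summarizes the most-visited counterfactual candidates into a small representative set; the walk above only illustrates the sampling step.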

Mert Kosan, Zexi Huang, Sourav Medya, Sayan Ranu, Ambuj Singh • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Model-level counterfactual explanation | P5Motif | Validity: 1.08 | 3 |
| Model-level counterfactual explanation | Mutagenicity | Validity: 1.12 | 3 |
| Model-level counterfactual explanation | AIDS | Validity: 1.05 | 3 |
| Model-level counterfactual explanation | BBBP | Validity: 1.1 | 3 |
