
The GECo algorithm for Graph Neural Networks Explanation

About

Graph Neural Networks (GNNs) are powerful models that can handle complex data sources and their interconnections. One of their main drawbacks is a lack of interpretability, which limits their adoption in sensitive fields. In this paper, we introduce a new methodology that leverages graph communities to address interpretability in graph classification problems. The proposed method, called GECo, builds on the idea that if a community is a densely connected subset of graph nodes, this density should play a role in graph classification. This is reasonable given message passing, the core computation mechanism of GNNs. GECo analyzes each community's contribution to the classification result, building a mask that highlights the graph's relevant structures. GECo is tested with Graph Convolutional Networks on six artificial and four real-world graph datasets and compared against the main explainability methods, namely PGMExplainer, PGExplainer, GNNExplainer, and SubgraphX, using four different metrics. GECo outperforms the other methods on the artificial graph datasets and on most of the real-world datasets.

Salvatore Calderaro, Domenico Amato, Giosuè Lo Bosco, Riccardo Rizzo, Filippo Vella • 2024
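The community-contribution idea described in the abstract can be illustrated with a toy example. The sketch below is not the paper's implementation: it substitutes a crude triangle-edge heuristic for whatever community-detection algorithm GECo actually uses, and a triangle-counting score for the trained GCN classifier. All function names (`triangle_communities`, `triangle_score`, `community_importance`) are hypothetical; the point is only the scheme of scoring each community by how much the graph-level prediction drops when that community is removed.

```python
from itertools import combinations

def triangle_communities(adj):
    """Crude community detection: keep only edges whose endpoints share a
    neighbor (i.e. edges closing a triangle), then take connected
    components. A stand-in for GECo's real community algorithm."""
    strong = {u: set() for u in adj}
    for u in adj:
        for v in adj[u]:
            if adj[u] & adj[v]:          # u-v edge lies in a triangle
                strong[u].add(v)
    seen, comms = set(), []
    for u in adj:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(strong[x] - comp)
        seen |= comp
        comms.append(comp)
    return comms

def triangle_score(adj):
    """Toy graph-level 'classifier': number of triangles (a motif signal),
    standing in for a trained GCN's class probability."""
    return sum(1 for a, b, c in combinations(adj, 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def community_importance(adj, score_fn):
    """Rank communities by the score drop caused by deleting them,
    mimicking GECo's per-community contribution analysis."""
    base = score_fn(adj)
    ranked = []
    for comp in triangle_communities(adj):
        rest = {u: adj[u] - comp for u in adj if u not in comp}
        ranked.append((sorted(comp), base - score_fn(rest)))
    return sorted(ranked, key=lambda t: -t[1])

# 5-clique (triangle-rich) with a sparse 4-node tail attached
edges = [(a, b) for a, b in combinations(range(5), 2)] + \
        [(4, 5), (5, 6), (6, 7), (7, 8)]
adj = {u: set() for u in range(9)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

ranking = community_importance(adj, triangle_score)
# The dense clique community dominates the ranking; the sparse tail
# contributes nothing, so it is excluded from the explanation mask.
```

On this toy graph, the clique `{0, 1, 2, 3, 4}` carries all 10 triangles, so deleting it zeroes the score while deleting any tail node changes nothing; the explanation mask would therefore highlight the clique.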

Related benchmarks

Task               Dataset               Metric     Result  Rank
GNN Explanation    ba_cycle_wheel        Fid+       0.866   12
GNN Explanation    ba_house_cycle        Fid+       0.929   6
GNN Explanation    er_house_cycle        Fid+       0.791   6
GNN Explanation    ba_cycle_wheel_grid   Fid+       0.887   6
GNN Explanation    er_cycle_wheel_grid   Fid+       0.915   6
Graph Explanation  Fluoride Carbonyl     Fidelity+  0.615   6
Graph Explanation  BENZENE               FID        0.71    6
Graph Explanation  Alkane Carbonyl       Fid+       0.575   6
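Most results above are reported as Fid+ (fidelity-plus). A common formulation of this metric is the average drop in the model's predicted-class probability once the explanation subgraph is removed from the input; the paper may use a slightly different variant (e.g. an accuracy-based one), so the sketch below assumes the probability form, and the numbers in it are made up for illustration.

```python
def fidelity_plus(prob_full, prob_masked):
    """Fid+ in its common probability form: mean drop in the predicted-class
    probability when the explanation subgraph is removed from each graph.
    Higher is better: removing a faithful explanation should hurt the
    prediction."""
    if len(prob_full) != len(prob_masked):
        raise ValueError("paired scores required")
    return sum(f - m for f, m in zip(prob_full, prob_masked)) / len(prob_full)

# Hypothetical probabilities for three graphs, before and after deleting
# each graph's explanation subgraph
fid = fidelity_plus([0.95, 0.90, 0.88], [0.10, 0.20, 0.15])
print(round(fid, 2))  # 0.76
```

A Fid+ near 1 means the explanations capture structure the classifier genuinely relies on, which is what the scores in the table are measuring.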
