
Exploring Concept Subspace for Self-explainable Text-Attributed Graph Learning

About

We introduce Graph Concept Bottleneck (GCB) as a new paradigm for self-explainable text-attributed graph learning. GCB maps graphs into a subspace, the concept bottleneck, where each concept is a meaningful phrase, and predictions are made based on the activations of these concepts. Unlike existing interpretable graph learning methods, which primarily rely on subgraphs as explanations, the concept bottleneck provides a new form of interpretation. To refine the concept space, we apply the information bottleneck principle to focus on the most relevant concepts. This not only yields more concise and faithful explanations but also explicitly guides the model to "think" toward the correct decision. We empirically show that GCB achieves intrinsic interpretability with accuracy on par with black-box Graph Neural Networks. Moreover, it delivers better performance under distribution shifts and data perturbations, showing improved robustness and generalizability, benefiting from concept-guided prediction.
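The core idea of a concept bottleneck can be illustrated with a minimal sketch (this is not the authors' implementation; the encoder, concept vectors, and weights here are hypothetical placeholders). A node embedding is projected onto a set of concept vectors, and class scores are a linear function of the resulting concept activations, so every prediction can be read off as "which concepts fired":

```python
import numpy as np

# Minimal illustrative sketch of a concept-bottleneck predictor.
# All names and dimensions are assumptions for demonstration only.

rng = np.random.default_rng(0)

n_dim, n_concepts, n_classes = 16, 4, 3
concepts = rng.normal(size=(n_concepts, n_dim))  # one vector per concept phrase
W = rng.normal(size=(n_classes, n_concepts))     # concept -> class weights

def predict(node_embedding):
    # Concept activations: similarity of the node to each concept.
    activations = concepts @ node_embedding      # shape (n_concepts,)
    # Class scores are linear in the activations, which makes the
    # decision attributable to individual concepts.
    scores = W @ activations                     # shape (n_classes,)
    return activations, scores

x = rng.normal(size=n_dim)
acts, scores = predict(x)
print("concept activations:", np.round(acts, 2))
print("predicted class:", int(np.argmax(scores)))
```

In a real system the embedding would come from a graph encoder and the concepts would be learned phrases; the interpretability comes from the bottleneck forcing all class evidence through the named concept activations.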

Xiaoxue Han, Libo Zhang, Zining Zhu, Yue Ning • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Cora (test) | – | – | 861 |
| Node Classification | REDDIT | Accuracy | 55.12 | 192 |
| Node Classification | Reddit (test) | – | – | 137 |
| Node Classification | REDDIT | F1 Score | 55.06 | 49 |
| Node Classification | Citeseer | F1 Score | 63.39 | 40 |
| Node Classification | wikiCS | F1 Score | 69.17 | 40 |
| Node Classification | Cora | F1 Score | 70.75 | 40 |
| Node Classification | Instagram | – | – | 34 |
| Node Classification | Citeseer OOD (test) | F1 Score | 60.19 | 30 |
| Node Classification | Instagram OOD (test) | F1 Score | 56.8 | 30 |

Showing 10 of 24 rows.
