
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking

About

Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models. However, there has been little work on interpreting them, and specifically on understanding which parts of the graphs (e.g., syntactic trees or co-reference structures) contribute to a prediction. In this work, we introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges. Given a trained GNN model, we learn a simple classifier that, for every edge in every layer, predicts whether that edge can be dropped. We demonstrate that such a classifier can be trained in a fully differentiable fashion, employing stochastic gates and encouraging sparsity through the expected $L_0$ norm. We use our technique as an attribution method to analyze GNN models for two tasks -- question answering and semantic role labeling -- providing insights into the information flow in these models. We show that a large proportion of edges can be dropped without deteriorating the performance of the model, while the remaining edges can be analyzed to interpret the model's predictions.
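The core ingredient described above is a stochastic gate per edge, trained with an expected-$L_0$ sparsity penalty. Below is a minimal NumPy sketch of such gates, following the standard "hard concrete" parameterisation of $L_0$ regularisation; the function names, constants, and the single `log_alpha` parameter per edge are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Standard hard-concrete constants: temperature and stretch interval.
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def sample_gates(log_alpha, rng):
    """Sample a gate in [0, 1] for every edge (one log_alpha per edge)."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    # Concrete (relaxed Bernoulli) sample, differentiable in log_alpha.
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / BETA))
    s_bar = s * (ZETA - GAMMA) + GAMMA   # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)      # clamping yields exact 0s and 1s

def expected_l0(log_alpha):
    """Expected number of open gates -- the sparsity penalty to minimise."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

rng = np.random.default_rng(0)
log_alpha = np.array([-4.0, 0.0, 4.0])   # one learnable parameter per edge
gates = sample_gates(log_alpha, rng)
# During message passing, each edge's message would be scaled by its gate:
#   h_v = sum over edges (u -> v) of gates[u, v] * msg(h_u)
# so edges whose gates collapse to 0 are effectively dropped.
```

Training would minimise the original task loss plus `expected_l0(log_alpha).sum()`, pushing gates to exact zeros; the surviving edges are the ones read off for attribution.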

Michael Sejr Schlichtkrull, Nicola De Cao, Ivan Titov • 2020

Related benchmarks

Task                 | Dataset                    | Metric    | Result | Rank
Graph Explanation    | ZINC250K HLM-CLint (test)  | Fidelity+ | 0.706  | 13
Graph Interpretation | BA-2MOTIFS                 | AUC       | 0.9254 | 9
Graph Interpretation | MNIST 75SP                 | AUC       | 73.1   | 9
Graph Interpretation | SPURIOUS-MOTIF b=0.5       | AUC       | 72.06  | 9
Graph Interpretation | SPURIOUS-MOTIF b=0.7       | AUC       | 0.7306 | 9
Graph Interpretation | SPURIOUS-MOTIF b=0.9       | AUC       | 0.6668 | 9
Graph Interpretation | MUTAG                      | AUC       | 62.23  | 9
Graph Explanation    | ZINC250K QED (test)        | Fidelity+ | 0.602  | 7
Graph Explanation    | ZINC250K DRD2 (test)       | Fidelity+ | 0.673  | 7
Graph Explanation    | ZINC250K RLM-CLint (test)  | Fidelity+ | 0.632  | 7
