Edge Contraction Pooling for Graph Neural Networks
About
Graph Neural Network (GNN) research has concentrated on improving convolutional layers, with little attention paid to developing graph pooling layers. Yet pooling layers can enable GNNs to reason over abstracted groups of nodes instead of single nodes. To close this gap, we propose a graph pooling layer relying on the notion of edge contraction: EdgePool learns a localized and sparse hard pooling transform. We show that EdgePool outperforms alternative pooling methods, can be easily integrated into most GNN models, and improves performance on both node and graph classification.
Frederik Diehl • 2019
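The core idea, pooling by contracting edges, can be illustrated with a minimal sketch. The paper scores each edge from its endpoint features, greedily contracts the highest-scoring edges whose endpoints are still unmerged, and gates the merged features by the edge score. The function name `edgepool`, the sigmoid gating, and the explicit weight vector `w` below are simplifying assumptions of this sketch, not the exact layer from the paper (which normalizes scores over incident edges, among other details):

```python
import numpy as np

def edgepool(x, edges, w, b=0.0):
    # x: (n, d) node features; edges: iterable of (i, j) index pairs.
    # w: (2d,) scoring vector, b: bias -- learned parameters in the real
    # layer; here they are plain inputs (an assumption of this sketch).
    n = x.shape[0]
    # 1) Score every edge from the concatenated endpoint features.
    scores = {}
    for (i, j) in edges:
        raw = float(w @ np.concatenate([x[i], x[j]]) + b)
        scores[(i, j)] = 1.0 / (1.0 + np.exp(-raw))  # sigmoid gate
    # 2) Greedily contract the highest-scoring edges whose endpoints
    #    are both still unmerged (this yields a maximal matching).
    cluster = -np.ones(n, dtype=int)
    pooled = []
    for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if cluster[i] == -1 and cluster[j] == -1:
            cluster[i] = cluster[j] = len(pooled)
            # Gate the merged feature by the edge score so the hard
            # pooling choice still passes gradients in the learned layer.
            pooled.append(s * (x[i] + x[j]))
    # 3) Nodes left unmatched survive as singleton clusters.
    for i in range(n):
        if cluster[i] == -1:
            cluster[i] = len(pooled)
            pooled.append(x[i])
    return np.stack(pooled), cluster
```

Because each contraction merges exactly two adjacent nodes, the transform is localized and sparse, and the graph shrinks by roughly half per pooling layer.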
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Graph Classification | NCI1 | Accuracy | 76 | 460 |
| Graph Classification | MUTAG (test) | Accuracy | 81.41 | 217 |
| Graph Classification | MUTAG (10-fold cross-validation) | Accuracy | 74.17 | 206 |
| Graph Classification | PROTEINS (10-fold cross-validation) | Accuracy | 75.12 | 197 |
| Graph Classification | PROTEINS (test) | Accuracy | 82.38 | 180 |
| Graph Classification | NCI1 (test) | Accuracy | 76.56 | 174 |
| Graph Classification | MolHIV | ROC AUC | 75 | 82 |
| Graph Classification | ENZYMES (test) | Accuracy | 65.33 | 77 |
| Graph Classification | REDDIT-B | Accuracy | 90 | 71 |
| Graph Classification | NCI109 (test) | Accuracy | 79.02 | 64 |
Showing 10 of 25 rows