# Implicit Graph Neural Networks
## About
Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
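The equilibrium idea above can be sketched in a few lines of NumPy: the "state" matrix is obtained by iterating the fixed-point map until convergence, rather than by stacking a fixed number of layers. This is a minimal illustration, not the paper's implementation; the shapes, the simple forward iteration, and the spectral-norm rescaling used to make the map a contraction (a cruder stand-in for the paper's Perron-Frobenius-based condition) are all assumptions for the sketch.

```python
import numpy as np

def ignn_fixed_point(W, A, B, phi=lambda z: np.maximum(z, 0.0),
                     tol=1e-8, max_iter=500):
    """Iterate X <- phi(W @ X @ A + B) to an approximate equilibrium.

    W : (m, m) state-transition weights (assumed to make the map contractive)
    A : (n, n) graph matrix (e.g. a normalized adjacency)
    B : (m, n) input-dependent bias (plays the role of the feature term)
    """
    X = np.zeros_like(B)
    for _ in range(max_iter):
        X_new = phi(W @ X @ A + B)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X

# Toy well-posed instance: rescale so ||W||_2 * ||A||_2 < 1, which (with a
# 1-Lipschitz phi such as ReLU) guarantees a unique fixed point.
rng = np.random.default_rng(0)
m, n = 4, 6
W = rng.standard_normal((m, m))
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2)
W *= 0.9 / np.linalg.norm(W, 2)
B = rng.standard_normal((m, n))
X = ignn_fixed_point(W, A, B)
residual = np.max(np.abs(X - np.maximum(W @ X @ A + B, 0.0)))
```

At the equilibrium, `X` satisfies the equation up to the tolerance, so `residual` is near zero; a downstream readout would then map `X` to node or graph predictions.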
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Graph Classification | PROTEINS | Accuracy | 77.7 | 742 |
| Node Classification | Chameleon | Accuracy | 41.38 | 549 |
| Node Classification | Squirrel | Accuracy | 24.99 | 500 |
| Graph Classification | NCI1 | Accuracy | 80.5 | 460 |
| Node Classification | Cornell | Accuracy | 61.35 | 426 |
| Node Classification | Texas | Accuracy | 58.37 | 410 |
| Node Classification | Wisconsin | Accuracy | 53.53 | 410 |
| Node Classification | Texas (test) | Mean Accuracy | 57.84 | 228 |
| Graph Classification | MUTAG (10-fold cross-validation) | Accuracy | 89.3 | 206 |
| Node Classification | Wisconsin (test) | Mean Accuracy | 53.53 | 198 |