# Ordered Subgraph Aggregation Networks

## About
Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently, provably boosting the expressive power of standard (message-passing) GNNs. However, there is a limited understanding of how these approaches relate to each other and to the Weisfeiler-Leman hierarchy. Moreover, current approaches either use all subgraphs of a given size, sample them uniformly at random, or use hand-crafted heuristics instead of learning to select subgraphs in a data-driven manner. Here, we offer a unified way to study such architectures by introducing a theoretical framework and extending the known expressivity results of subgraph-enhanced GNNs. Concretely, we show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations by relating them to the established $k\text{-}\mathsf{WL}$ hierarchy. In addition, we explore different approaches for learning to sample subgraphs using recent methods for backpropagating through complex discrete probability distributions. Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced graph neural networks while reducing computation time.
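The abstract mentions learning to sample subgraphs by backpropagating through discrete probability distributions, but does not show the mechanism. A minimal sketch of one standard estimator of this kind, Gumbel-top-k sampling over a set of candidate subgraphs (e.g., node-deletion subgraphs), is given below; the function name and setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gumbel_topk_subgraphs(logits, k, tau=1.0, rng=None):
    """Sample k of n candidate subgraphs via Gumbel-top-k.

    Perturbing the logits with i.i.d. Gumbel noise and taking the top-k
    indices draws k subgraphs *without replacement* from softmax(logits).
    The softmax of the perturbed logits at temperature tau serves as a
    differentiable surrogate for the hard selection during training.
    (Illustrative sketch; not the paper's code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via inverse transform sampling
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    perturbed = logits + gumbel
    # Hard sample: indices of the k largest perturbed logits
    topk = np.argsort(perturbed)[::-1][:k]
    # Relaxed sample: temperature-scaled softmax over perturbed logits
    weights = np.exp((perturbed - perturbed.max()) / tau)
    weights /= weights.sum()
    return topk, weights

# Example: 6 candidate node-deletion subgraphs, select 3
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.0, 1.5])
idx, w = gumbel_topk_subgraphs(logits, k=3, rng=np.random.default_rng(0))
```

In a learned sampler, `logits` would come from a small network scoring each candidate subgraph, and gradients would flow through the relaxed weights `w` rather than the discrete indices.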
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Graph property prediction | ZINC v1 (test) | MAE | 0.155 | 15 |
| Molecular property prediction | PCQM4M V2 | MAE | 0.0862 | 10 |
| Molecular property prediction | Molecule3D (scaffold) | MAE | 0.143 | 9 |
| Molecular property prediction | Molecule3D (random) | MAE | 0.0372 | 9 |
| Molecular property prediction | OLED 100–500 atoms (random) | S1 MAE | 0.55 | 7 |
| Graph expressivity evaluation | BREC Basic | Count | 56 | 5 |
| Graph expressivity evaluation | BREC Regular | Number | 8 | 5 |