Graph Retention Networks for Dynamic Graphs
About
In this paper, we propose Graph Retention Networks (GRNs) as a unified architecture for deep learning on dynamic graphs. The GRN extends the retention mechanism to dynamic graph data as graph retention, equipping the model with three key computational paradigms: parallelizable training, low-cost $\mathcal{O}(1)$ inference, and long-term chunkwise training. This architecture achieves a strong balance between efficiency, effectiveness, and scalability. Extensive experiments on benchmark datasets demonstrate strong performance in both edge-level prediction and node-level classification tasks, with significantly reduced training latency, lower GPU memory overhead, and up to 86.7x higher inference throughput compared to SOTA baselines. The proposed GRN architecture achieves competitive performance across diverse dynamic graph benchmarks, demonstrating its adaptability to a wide range of tasks.
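The three computational paradigms above come from the dual forms of retention: a parallel form for training and an equivalent recurrent form whose per-step cost is constant. The sketch below illustrates this duality for plain (sequence) retention in the RetNet style that graph retention builds on; the dimensions, decay factor `gamma`, and single-head setup are illustrative assumptions, not the paper's exact graph formulation.

```python
import numpy as np

# Illustrative sketch of retention's two equivalent forms (RetNet-style);
# gamma, T, and d are arbitrary choices, not values from the paper.
rng = np.random.default_rng(0)
T, d = 5, 4          # sequence length, head dimension
gamma = 0.9          # exponential decay factor
Q = rng.standard_normal((T, d))
K = rng.standard_normal((T, d))
V = rng.standard_normal((T, d))

# Parallel form (parallelizable training): O = (Q K^T * D) V,
# where D[n, m] = gamma^(n-m) for n >= m, else 0 (causal decay mask).
n, m = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n >= m, gamma ** (n - m), 0.0)
O_parallel = (Q @ K.T * D) @ V

# Recurrent form (O(1) cost per inference step):
# S_n = gamma * S_{n-1} + K_n^T V_n,  O_n = Q_n S_n.
# The fixed-size state S summarizes the whole history.
S = np.zeros((d, d))
O_recurrent = np.zeros((T, d))
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])
    O_recurrent[t] = Q[t] @ S

# Both forms produce the same outputs.
assert np.allclose(O_parallel, O_recurrent)
```

Chunkwise training interpolates between the two: a chunk is processed with the parallel form while the cross-chunk history is carried in the recurrent state `S`, which is what enables training over long dynamic-graph histories at bounded memory.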
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Node Classification | -- | -- | 192 |
| Link Prediction | Reddit (inductive) | AP 98.57 | 81 |
| Link Prediction | Enron (inductive) | AP 81.39 | 66 |
| Inductive Dynamic Link Prediction | Reddit (inductive) | AUC-ROC (%) 98.9 | 65 |
| Link Prediction | Enron (transductive) | -- | 49 |
| Dynamic Link Prediction | UN Vote | AP 83.09 | 37 |
| Dynamic Link Prediction | UN Trade | AP 71.54 | 37 |
| Dynamic Link Prediction | US Legislature | AP 79.21 | 37 |
| Dynamic Link Prediction | Contact | AP 89.34 | 37 |
| Link Prediction | LastFM (inductive) | AP 83.44 | 35 |