
Do We Really Need Complicated Model Architectures For Temporal Networks?

About

Recurrent neural networks (RNNs) and the self-attention mechanism (SAM) are the de facto methods for extracting spatial-temporal information in temporal graph learning. Interestingly, we found that although both RNNs and SAM can lead to good performance, in practice neither is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder based only on multi-layer perceptrons (MLPs) to summarize the information from temporal links, (2) a node-encoder based only on neighbor mean-pooling to summarize node information, and (3) an MLP-based link classifier that performs link prediction on the outputs of the two encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks, with faster convergence and better generalization. These results motivate us to rethink the importance of simpler model architectures.
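The three components above can be sketched roughly in NumPy. Everything here is an illustrative assumption rather than the paper's exact design: the dimensions and initialization are made up, and a plain mean followed by an MLP stands in for the paper's link-encoder (which applies a fixed time encoding and an MLP-Mixer to recent links).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer perceptron with ReLU: the only learned module in this sketch.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def init_mlp(d_in, d_hid, d_out):
    # Hypothetical random initialization, for illustration only.
    return (rng.normal(scale=0.1, size=(d_in, d_hid)), np.zeros(d_hid),
            rng.normal(scale=0.1, size=(d_hid, d_out)), np.zeros(d_out))

# Hypothetical dimensions: temporal-link features, node features, hidden width.
d_link, d_node, d_hid, d_enc = 8, 6, 16, 4

# (1) Link-encoder: summarizes a node's recent temporal links with an MLP
#     (a mean + MLP stands in here for the paper's MLP-Mixer variant).
def encode_links(link_feats, params):
    return mlp(link_feats.mean(axis=0), *params)

# (2) Node-encoder: parameter-free mean-pooling over one-hop neighbor features.
def encode_node(node_feat, neighbor_feats):
    return node_feat + neighbor_feats.mean(axis=0)

# (3) Link classifier: MLP scoring the concatenated encodings of both endpoints.
def predict_link(z_src, z_dst, params):
    return mlp(np.concatenate([z_src, z_dst]), *params)

# Toy forward pass for one candidate edge (u, v) on random data.
link_params = init_mlp(d_link, d_hid, d_enc)
clf_params = init_mlp(2 * (d_enc + d_node), d_hid, 1)

def embed(num_links, num_neighbors):
    # Each endpoint's embedding concatenates its link and node encodings.
    return np.concatenate([
        encode_links(rng.normal(size=(num_links, d_link)), link_params),
        encode_node(rng.normal(size=d_node),
                    rng.normal(size=(num_neighbors, d_node))),
    ])

score = predict_link(embed(5, 3), embed(7, 2), clf_params)  # scalar logit
```

Note that nothing here is recurrent or attention-based, which is the point of the paper: fixed aggregations plus MLPs carry all the temporal and structural information.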

Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Link Prediction | Reddit (inductive) | AP | 95.26 | 52 |
| Dynamic Link Detection | ENRON | AP | 82.25 | 44 |
| Dynamic Graph Anomaly Detection | MOOC S2 | AUROC | 68.71 | 42 |
| Dynamic Graph Anomaly Detection | Wikipedia S2 | AUROC | 75.29 | 42 |
| Dynamic New Link Prediction | Social Evo. | AP | 0.9493 | 37 |
| Link Prediction | Enron (inductive) | AP | 75.88 | 37 |
| Link Prediction | Reddit (transductive) | AP | 97.5 | 30 |
| Link Prediction | LastFM (transductive) | AP | 86.2 | 28 |
| Link Prediction | Enron (transductive) | AP | 82.4 | 28 |
| Dynamic Link Prediction | UCI | AP | 93.25 | 27 |

Showing 10 of 68 rows.
