
Do We Really Need Complicated Model Architectures For Temporal Networks?

About

Recurrent neural networks (RNNs) and the self-attention mechanism (SAM) are the de facto methods for extracting spatial-temporal information in temporal graph learning. Interestingly, we found that although both RNN and SAM can lead to good performance, in practice neither is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder, based only on multi-layer perceptrons (MLPs), that summarizes the information from temporal links; (2) a node-encoder, based only on neighbor mean-pooling, that summarizes node information; and (3) an MLP-based link classifier that performs link prediction from the outputs of the two encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks, with faster convergence and better generalization. These results motivate us to rethink the importance of simpler model architectures.

Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi • 2023
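To make the three components concrete, below is a minimal PyTorch sketch. The class names, feature shapes, and the way the two encoder outputs are concatenated before the classifier are illustrative assumptions drawn only from the abstract, not the authors' released implementation; in particular, the link-encoder here simply flattens a fixed number of recent link features and passes them through an MLP, which stands in for the paper's MLP-only temporal-link summarization.

```python
# Illustrative sketch only; dimensions, wiring, and class names are assumptions.
import torch
import torch.nn as nn


class LinkEncoder(nn.Module):
    """MLP-only link encoder: summarizes the K most recent temporal link
    features of a node with an MLP (no RNN, no self-attention)."""
    def __init__(self, num_links: int, link_feat_dim: int, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_links * link_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, link_feats: torch.Tensor) -> torch.Tensor:
        # link_feats: [batch, K, link_feat_dim] -> [batch, hidden_dim]
        return self.mlp(link_feats.flatten(start_dim=1))


class NodeEncoder(nn.Module):
    """Neighbor mean-pooling node encoder: averages the features of a node's
    temporal neighbors and adds them to the node's own features."""
    def __init__(self, node_feat_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(node_feat_dim, hidden_dim)

    def forward(self, node_feat: torch.Tensor, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # node_feat: [batch, node_feat_dim], neighbor_feats: [batch, N, node_feat_dim]
        pooled = node_feat + neighbor_feats.mean(dim=1)  # mean-pool over neighbors
        return self.proj(pooled)


class GraphMixerSketch(nn.Module):
    """Combines the two encoders with an MLP link classifier that scores a
    candidate (src, dst) link from the concatenated node summaries."""
    def __init__(self, num_links: int, link_feat_dim: int, node_feat_dim: int, hidden_dim: int):
        super().__init__()
        self.link_encoder = LinkEncoder(num_links, link_feat_dim, hidden_dim)
        self.node_encoder = NodeEncoder(node_feat_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),  # (link + node) summaries for src and dst
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, src_links, dst_links, src_feat, src_nbrs, dst_feat, dst_nbrs):
        h_src = torch.cat([self.link_encoder(src_links),
                           self.node_encoder(src_feat, src_nbrs)], dim=-1)
        h_dst = torch.cat([self.link_encoder(dst_links),
                           self.node_encoder(dst_feat, dst_nbrs)], dim=-1)
        return self.classifier(torch.cat([h_src, h_dst], dim=-1))  # link logit
```

A forward pass on random tensors (e.g. `GraphMixerSketch(20, 172, 172, 100)` with batched `[B, 20, 172]` link features and `[B, N, 172]` neighbor features) returns one logit per candidate link, which can be trained with a binary cross-entropy loss against observed versus negatively sampled links.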

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | REDDIT | - | - | 192 |
| Link Prediction | Reddit (inductive) | AP | 95.26 | 81 |
| Link Prediction | Enron (inductive) | AP | 75.88 | 66 |
| Inductive dynamic link prediction | Reddit (inductive) | AUC-ROC (%) | 94.97 | 65 |
| Link Prediction | Enron (transductive) | AP | 82.4 | 49 |
| Dynamic Link Prediction | Can. Parl. Inductive | AP | 77.04 | 48 |
| Dynamic Link Prediction | Wikipedia (inductive) | AP | 96.65 | 44 |
| Inductive dynamic link prediction | Wikipedia (inductive) | AUC-ROC | 0.963 | 44 |
| Dynamic Link Detection | ENRON | AP | 82.25 | 44 |
| Dynamic Graph Anomaly Detection | MOOC S2 | AUROC | 68.71 | 42 |

Showing 10 of 185 rows.
