
Revisiting Over-smoothing in Deep GCNs

About

Over-smoothing is widely assumed to be the major cause of the performance drop in deep graph convolutional networks (GCNs). In this paper, we propose a new view: deep GCNs can actually learn to anti-over-smooth during training. We interpret a standard GCN architecture as a layerwise integration of a multi-layer perceptron (MLP) and graph regularization. We analyze this view and conclude that before training, the final representation of a deep GCN does over-smooth; during training, however, the network learns to counteract the smoothing. Based on this conclusion, we design a cheap but effective trick to improve GCN training. We verify our conclusions and evaluate the trick on three citation networks, and we further provide insights into neighborhood aggregation in GCNs.

Chaoqi Yang, Ruijie Wang, Shuochao Yao, Shengzhong Liu, Tarek Abdelzaher • 2020
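
The paper's own code is not shown here, but the decomposition described in the abstract is easy to illustrate. Below is a minimal PyTorch sketch (dense tensors, a random toy graph; the names and setup are ours, not the authors') of the standard GCN layer H' = ReLU(Â H W), read as an MLP step (H W) followed by one step of graph smoothing (left-multiplication by Â = D^{-1/2}(A + I)D^{-1/2}). Stacking many such untrained layers typically drives node representations toward one another, which is the over-smoothing-at-initialization behavior the abstract describes.

import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One GCN layer, read as an MLP step (H @ W) then graph smoothing (A_hat @ .)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)  # the MLP part

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return torch.relu(a_hat @ self.linear(h))  # smoothing after the MLP step

# Toy check of over-smoothing at initialization: stack untrained layers on a
# random undirected graph and watch node representations grow more similar.
torch.manual_seed(0)
adj = (torch.rand(8, 8) < 0.4).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0.0)  # symmetric, no self-loops
a_hat = normalize_adjacency(adj)

h = torch.randn(8, 16)
for depth, layer in enumerate([GCNLayer(16, 16) for _ in range(16)], start=1):
    h = layer(h, a_hat)
    if depth % 4 == 0:
        hn = nn.functional.normalize(h, dim=1)
        sim = (hn @ hn.t()).mean().item()  # mean cosine similarity over node pairs (incl. self)
        print(f"depth {depth:2d}: mean cosine similarity = {sim:.3f}")

At a random initialization the printed similarity typically climbs with depth; the paper's claim is that training then learns to reverse this collapse rather than the depth itself being fatal.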

Related benchmarks

Task                            Dataset           Metric    Result   Rank
Graph Regression                ZINC 12K (test)   MAE       0.1632   164
Graph Classification            MolHIV            ROC-AUC   76.37    82
Graph Classification            MOLTOX21          ROC-AUC   0.7498   38
Molecular Property Prediction   MOLESOL           RMSE      1.062    37
Graph Classification            MOLBACE           ROC-AUC   0.7636   31
