
Scaling Up Graph Neural Networks Via Graph Coarsening

About

Scalability of graph neural networks remains one of the major challenges in graph machine learning. Since the representation of a node is computed by recursively aggregating and transforming the representation vectors of its neighboring nodes from previous layers, the receptive field grows exponentially with depth, which makes standard stochastic optimization techniques ineffective. Various approaches have been proposed to alleviate this issue, e.g., sampling-based methods and techniques based on pre-computation of graph filters. In this paper, we take a different approach and propose to use graph coarsening for scalable training of GNNs, which is generic, extremely simple, and has sublinear memory and time costs during training. We present an extensive theoretical analysis of the effect of using coarsening operations and provide useful guidance on the choice of coarsening methods. Interestingly, our theoretical analysis shows that coarsening can also be considered a type of regularization and may improve generalization. Finally, empirical results on real-world datasets show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable degradation in classification accuracy.
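To make the coarsening step concrete, the following is a minimal sketch (not the paper's implementation) of the standard partition-based construction: given a node-to-cluster assignment, build the indicator matrix P and form the coarse adjacency P^T A P with mean-pooled features. Function and variable names are illustrative assumptions.

```python
import numpy as np

def coarsen(A, X, clusters):
    """Coarsen graph (A, X) given a node-to-cluster assignment.

    clusters[i] is the cluster id of node i. Builds the n x k partition
    (indicator) matrix P, then returns the coarse weighted adjacency
    A_c = P^T A P and cluster-averaged features X_c. This is a generic
    sketch, not the authors' code.
    """
    n = len(clusters)
    k = int(clusters.max()) + 1
    P = np.zeros((n, k))
    P[np.arange(n), clusters] = 1.0
    sizes = P.sum(axis=0)                 # number of nodes per cluster
    A_c = P.T @ A @ P                     # aggregated edge weights between clusters
    X_c = (P.T @ X) / sizes[:, None]      # mean-pool node features per cluster
    return A_c, X_c

# Toy graph: 4 nodes, clusters {0, 1} and {2, 3}
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [3.0], [0.0], [2.0]])
clusters = np.array([0, 0, 1, 1])
A_c, X_c = coarsen(A, X, clusters)
# A_c's diagonal counts (doubled) intra-cluster edges; off-diagonals count
# edges between clusters. A GNN is then trained on (A_c, X_c) instead of (A, X).
```

Any off-the-shelf partitioning (e.g., spectral or variation-neighborhood coarsening, as surveyed in the paper) can supply the `clusters` vector; training on the smaller coarse graph is what yields the sublinear memory and time costs.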

Zengfeng Huang, Shengzhong Zhang, Chong Xi, Tang Liu, Min Zhou• 2021

Related benchmarks

Task                 Dataset             Result               Rank
Node Classification  Cora                Accuracy 80.8        1215
Node Classification  Citeseer            Accuracy 71.6        931
Node Classification  Cora (test)         Mean Accuracy 70.6   861
Node Classification  Citeseer (test)     Accuracy 0.653       824
Node Classification  Pubmed              Accuracy 79.3        819
Node Classification  ogbn-arxiv (test)   Accuracy 68.92       433
Node Classification  REDDIT              Accuracy 66.3        192
Node Classification  Physics             Accuracy 93.1        145
Node Classification  Reddit (test)       Accuracy 47.4        137
Node Classification  DBLP (test)         --                   70

(Showing 10 of 29 rows.)
