
DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks

About

This paper studies Dropout Graph Neural Networks (DropGNNs), a new approach that aims to overcome the limitations of standard GNN frameworks. In DropGNNs, we execute multiple runs of a GNN on the input graph, with some of the nodes randomly and independently dropped in each of these runs. Then, we combine the results of these runs to obtain the final result. We prove that DropGNNs can distinguish various graph neighborhoods that cannot be separated by message passing GNNs. We derive theoretical bounds for the number of runs required to ensure a reliable distribution of dropouts, and we prove several properties regarding the expressive capabilities and limits of DropGNNs. We experimentally validate our theoretical findings on expressiveness. Furthermore, we show that DropGNNs perform competitively on established GNN benchmarks.
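To make the run-and-aggregate idea concrete, here is a minimal sketch of a DropGNN-style model in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the layer design (`SimpleMPLayer`), the number of runs, the dropout probability, and the mean aggregation over runs are all illustrative choices.

```python
# A minimal DropGNN-style sketch (assumed PyTorch setup; names and
# hyperparameters are illustrative, not the paper's exact code).
import torch
import torch.nn as nn

class SimpleMPLayer(nn.Module):
    """One sum-aggregation message-passing layer on a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        neigh = adj @ x  # sum of neighbor features
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class DropGNN(nn.Module):
    """Executes the base GNN several times with independent random node
    dropouts and combines the per-run graph embeddings (here: by mean)."""
    def __init__(self, in_dim, hidden_dim, num_runs=10, p_drop=0.1):
        super().__init__()
        self.layers = nn.ModuleList([
            SimpleMPLayer(in_dim, hidden_dim),
            SimpleMPLayer(hidden_dim, hidden_dim),
        ])
        self.num_runs = num_runs
        self.p_drop = p_drop

    def forward(self, x, adj):
        run_outputs = []
        for _ in range(self.num_runs):
            # Drop each node independently with probability p_drop: a dropped
            # node contributes no features and sends/receives no messages.
            keep = (torch.rand(x.size(0), 1) > self.p_drop).float()
            h, a = x * keep, adj * keep * keep.T
            for layer in self.layers:
                h = layer(h, a)
            run_outputs.append(h.sum(dim=0))  # sum-readout for this run
        return torch.stack(run_outputs).mean(dim=0)  # combine the runs

# Toy usage: a 4-node cycle graph with 3-dimensional node features.
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
x = torch.randn(4, 3)
model = DropGNN(in_dim=3, hidden_dim=16)
print(model(x, adj).shape)  # torch.Size([16])
```

The intuition, per the abstract: because each run sees a slightly different graph, the combined output can separate neighborhoods (e.g., regular graphs) that a single deterministic message-passing GNN cannot distinguish.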

Pál András Papp, Karolis Martinkus, Lukas Faber, Roger Wattenhofer • 2021

Related benchmarks

| Task | Dataset | Accuracy (%) | Rank |
|------|---------|--------------|------|
| Graph Classification | PROTEINS | 76.3 | 742 |
| Graph Classification | MUTAG | 90.4 | 697 |
| Graph Classification | NCI1 | 81.6 | 460 |
| Graph Classification | NCI109 | 80.8 | 223 |
| Graph Classification | Mutag (test) | 85.79 | 217 |
| Graph Classification | MUTAG (10-fold cross-validation) | 90.4 | 206 |
| Graph Classification | PROTEINS (10-fold cross-validation) | 76.9 | 197 |
| Graph Classification | PROTEINS (test) | 55.14 | 180 |
| Graph Classification | PTC-MR | 66.3 | 153 |
| Graph Classification | IMDB-B (10-fold cross-validation) | 75.7 | 148 |

Showing 10 of 24 rows.
