
The Surprising Power of Graph Neural Networks with Random Node Initialization

About

Graph neural networks (GNNs) are effective models for representation learning on relational data. However, standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism heuristic. In order to break this expressiveness barrier, GNNs have been enhanced with random node initialization (RNI), where the idea is to train and run the models with randomized initial node features. In this work, we analyze the expressive power of GNNs with RNI, and prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. This universality result holds even with partially randomized initial node features, and preserves the invariance properties of GNNs in expectation. We then empirically analyze the effect of RNI on GNNs, based on carefully constructed datasets. Our empirical findings support the superior performance of GNNs with RNI over standard GNNs.
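The core idea of RNI described above can be illustrated with a minimal sketch: before each forward pass, a fixed number of freshly sampled random values is appended to every node's feature vector. The function name, the Gaussian sampling, and the NumPy setup are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def randomize_node_features(x, num_random_dims=4, rng=None):
    """Append num_random_dims randomly sampled features to each node.

    x: (num_nodes, num_features) array of original node features.
    Fresh random values are drawn on every call, i.e. at every
    training epoch and again at test time, as RNI prescribes.
    Gaussian noise is an assumption here; other distributions work too.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(size=(x.shape[0], num_random_dims))
    return np.concatenate([x, noise], axis=1)

# Example: 5 nodes, each with 3 original features
x = np.ones((5, 3))
x_rni = randomize_node_features(x, num_random_dims=4)
print(x_rni.shape)  # (5, 7)
```

Because only some feature dimensions are randomized while the originals are kept, this also illustrates the partially randomized setting for which the universality result holds.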

Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, Thomas Lukasiewicz • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Graph Classification | CSL (test) | Mean Accuracy | 16 | 45
Graph Classification | EXP (test) | Accuracy | 99.7 | 33
Graph Classification | SR25 (test) | Accuracy | 6.7 | 8
Graph Classification | SR25 | Accuracy | 6.67 | 8
Binary Graph Classification | EXP (10-fold cross val) | Accuracy | 99.7 | 8
Graph Classification | EXP synthetic (test) | Accuracy | 99.7 | 6
Graph Classification | CSL synthetic (test) | Accuracy | 16 | 6
