Do Transformers Really Perform Bad for Graph Representation?

About

The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet it has not achieved competitive performance on popular leaderboards of graph-level prediction compared with mainstream GNN variants, so it remains a mystery whether Transformers can perform well for graph representation learning. In this paper, we resolve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for applying the Transformer to graphs is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods that help Graphormer better model graph-structured data. In addition, we mathematically characterize the expressive power of Graphormer and show that, with our ways of encoding the structural information of graphs, many popular GNN variants can be covered as special cases of Graphormer.

Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu • 2021
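The abstract does not spell out the structural encodings, but the core idea (injecting graph structure directly into standard Transformer attention) can be illustrated with a minimal sketch. The PyTorch code below is not the authors' implementation; it shows two of Graphormer's best-known encodings: a degree-based centrality encoding added to node features, and a spatial encoding that biases attention logits by shortest-path distance. The class name `GraphormerAttentionSketch` and the sizes `max_degree` and `max_dist` are illustrative assumptions, and the paper's edge encoding is omitted for brevity.

```python
# Minimal, illustrative sketch (not the official Graphormer code) of how
# structural information can be injected into standard Transformer attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphormerAttentionSketch(nn.Module):
    """Single-head self-attention with two Graphormer-style encodings:
    - centrality encoding: a learned embedding of node degree, added to inputs
    - spatial encoding: a learned scalar bias per shortest-path distance,
      added to the attention logits before softmax
    Sizes and names here are assumptions for illustration only."""

    def __init__(self, hidden_dim: int, max_degree: int = 64, max_dist: int = 16):
        super().__init__()
        self.q = nn.Linear(hidden_dim, hidden_dim)
        self.k = nn.Linear(hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, hidden_dim)
        self.degree_emb = nn.Embedding(max_degree + 1, hidden_dim)  # centrality encoding
        self.dist_bias = nn.Embedding(max_dist + 1, 1)              # spatial encoding
        self.scale = hidden_dim ** -0.5

    def forward(self, x, degree, spd):
        # x:      (n_nodes, hidden_dim) node features
        # degree: (n_nodes,) integer node degrees, clipped to max_degree
        # spd:    (n_nodes, n_nodes) shortest-path distances, clipped to max_dist
        x = x + self.degree_emb(degree)                    # add centrality encoding
        scores = (self.q(x) @ self.k(x).T) * self.scale    # standard scaled dot-product
        scores = scores + self.dist_bias(spd).squeeze(-1)  # bias logits by graph distance
        return F.softmax(scores, dim=-1) @ self.v(x)

# Toy usage on a 4-node path graph (degrees and distances written by hand).
x = torch.randn(4, 32)
degree = torch.tensor([1, 2, 2, 1])
spd = torch.tensor([[0, 1, 2, 3],
                    [1, 0, 1, 2],
                    [2, 1, 0, 1],
                    [3, 2, 1, 0]])
out = GraphormerAttentionSketch(32)(x, degree, spd)
print(out.shape)  # torch.Size([4, 32])
```

Because both encodings enter as ordinary embeddings and attention biases, the model stays a standard Transformer; this is what lets the paper recover popular GNN variants as special cases.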

Related benchmarks

| Task                 | Dataset   | Metric   | Result | Rank |
|----------------------|-----------|----------|--------|------|
| Node Classification  | Cora      | Accuracy | 86.4   | 885  |
| Node Classification  | Citeseer  | Accuracy | 74.6   | 804  |
| Graph Classification | PROTEINS  | Accuracy | 68.4   | 742  |
| Graph Classification | MUTAG     | Accuracy | 74.4   | 697  |
| Node Classification  | Chameleon | Accuracy | 53.8   | 549  |
| Node Classification  | Squirrel  | Accuracy | 34.6   | 500  |
| Graph Classification | NCI1      | Accuracy | 77.0   | 460  |
| Node Classification  | Cornell   | Accuracy | 68.3   | 426  |
| Node Classification  | Wisconsin | Accuracy | 84.0   | 410  |
| Node Classification  | Texas     | Accuracy | 76.7   | 410  |

Accuracy is given in %. Showing 10 of 143 rows.
…

Other info

Code
