
Graph Propagation Transformer for Graph Representation Learning

About

This paper presents a novel transformer architecture for graph representation learning. The core insight of our method is to fully consider the information propagation among nodes and edges in a graph when building the attention module in the transformer blocks. Specifically, we propose a new attention mechanism called Graph Propagation Attention (GPA). It explicitly passes information among nodes and edges in three ways, i.e., node-to-node, node-to-edge, and edge-to-node, which is essential for learning graph-structured data. Building on GPA, we design an effective transformer architecture named Graph Propagation Transformer (GPTrans) to further facilitate graph representation learning. We verify the performance of GPTrans in a wide range of graph learning experiments on several benchmark datasets. The results show that our method outperforms many state-of-the-art transformer-based graph models. The code will be released at https://github.com/czczup/GPTrans.
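Based only on the abstract's description of the three propagation paths, the following PyTorch sketch illustrates how node-to-node, edge-to-node, and node-to-edge propagation could fit into a single attention block. The tensor shapes, the dense edge tensor `e`, and the projections `edge_in` and `edge_out` are all assumptions made for illustration, not the authors' implementation; consult the linked repository for the official code.

```python
# A minimal, hypothetical sketch of Graph Propagation Attention (GPA).
# Assumptions: node features x of shape (B, N, C) and dense per-pair edge
# features e of shape (B, N, N, C); edges bias attention scores per head
# (edge-to-node) and are updated from the attention map (node-to-edge).
import torch
import torch.nn as nn


class GraphPropagationAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.edge_in = nn.Linear(dim, num_heads)   # edge -> per-head score bias
        self.edge_out = nn.Linear(num_heads, dim)  # attention map -> edge update
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, e: torch.Tensor):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, H, N, C/H)

        # Node-to-node: standard scaled dot-product attention scores.
        attn = (q @ k.transpose(-2, -1)) * self.scale       # (B, H, N, N)

        # Edge-to-node: edge features contribute a per-head additive bias.
        attn = attn + self.edge_in(e).permute(0, 3, 1, 2)   # (B, H, N, N)
        attn = attn.softmax(dim=-1)

        # Node-to-edge: the attention map writes back into the edge features.
        e = e + self.edge_out(attn.permute(0, 2, 3, 1))     # (B, N, N, C)

        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(x), e


# Usage with random data: batch of 2 graphs, 4 nodes, 32-dim features.
gpa = GraphPropagationAttention(dim=32, num_heads=4)
x, e = torch.randn(2, 4, 32), torch.randn(2, 4, 4, 32)
x, e = gpa(x, e)
```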

Zhe Chen, Hao Tan, Tao Wang, Tianrun Shen, Tong Lu, Qiuying Peng, Cheng Cheng, Yue Qi • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Graph Classification | ogbg-molpcba (test) | AP 32.43 | 206 |
| Graph Regression | OGB-LSC PCQM4M v2 (val) | MAE 0.0809 | 81 |
| Graph property regression | PCQM4M (val) | MAE 0.1151 | 19 |
| Graph-level classification | MolHIV (test) | AUC 0.8126 | 19 |
| Graph Regression | PCQM4M v2 (val) | MAE 0.0809 | 13 |
| Graph property regression | PCQM4M v2 (test-dev) | MAE 0.0821 | 9 |
| Graph Regression | OGB-LSC PCQM4M (test) | -- | 9 |
| Graph Regression | PCQM4M v2 (test) | MAE 0.0821 | 8 |

Other info

Code: https://github.com/czczup/GPTrans
