Learning Graph Quantized Tokenizers

About

Transformers serve as the backbone architectures of Foundation Models, where domain-specific tokenizers allow them to adapt to various domains. Graph Transformers (GTs) have recently emerged as leading models in geometric deep learning, outperforming Graph Neural Networks (GNNs) in various graph learning tasks. However, the development of tokenizers for graphs has lagged behind other modalities. To address this, we introduce GQT (Graph Quantized Tokenizer), which decouples tokenizer training from Transformer training by leveraging multi-task graph self-supervised learning, yielding robust and generalizable graph tokens. Furthermore, GQT utilizes Residual Vector Quantization (RVQ) to learn hierarchical discrete tokens, resulting in significantly reduced memory requirements and improved generalization capabilities. By combining GQT with token modulation, a Transformer encoder achieves state-of-the-art performance on 20 out of 22 benchmarks, including large-scale homophilic and heterophilic datasets.
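
To make the RVQ idea concrete: each codebook quantizes the residual left over by the previous level, so every node is represented by a small stack of hierarchical discrete token ids rather than a full continuous embedding. The sketch below is a minimal, generic RVQ illustration in PyTorch, not the authors' implementation; the class name, hyperparameters, and the straight-through gradient trick are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class ResidualVectorQuantizer(nn.Module):
    """Minimal residual vector quantization (RVQ) sketch.

    Each of `num_levels` codebooks quantizes the residual left by the
    previous level, yielding a hierarchy of discrete codes per input.
    Hyperparameters below are illustrative, not taken from the GQT paper.
    """

    def __init__(self, dim: int = 64, codebook_size: int = 256, num_levels: int = 3):
        super().__init__()
        self.codebooks = nn.ModuleList(
            [nn.Embedding(codebook_size, dim) for _ in range(num_levels)]
        )

    def forward(self, x: torch.Tensor):
        # x: (num_nodes, dim) continuous node embeddings from a tokenizer encoder
        residual = x
        quantized = torch.zeros_like(x)
        codes = []
        for codebook in self.codebooks:
            # Nearest codebook entry (Euclidean distance) for the current residual
            distances = torch.cdist(residual, codebook.weight)   # (num_nodes, codebook_size)
            indices = distances.argmin(dim=-1)                   # discrete token ids at this level
            selected = codebook(indices)
            quantized = quantized + selected
            residual = residual - selected
            codes.append(indices)
        # Straight-through estimator so gradients flow back to the encoder
        quantized = x + (quantized - x).detach()
        return quantized, torch.stack(codes, dim=-1)             # codes: (num_nodes, num_levels)

# Usage: quantize 10 node embeddings into 3 hierarchical discrete tokens each
rvq = ResidualVectorQuantizer()
z = torch.randn(10, 64)
z_q, token_ids = rvq(z)
print(token_ids.shape)  # torch.Size([10, 3])
```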

Limei Wang, Kaveh Hassani, Si Zhang, Dongqi Fu, Baichuan Yuan, Weilin Cong, Zhigang Hua, Hao Wu, Ning Yao, Bo Long • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Node Classification | Pubmed | Accuracy | 85.79 | 307 |
| Node Classification | wikiCS | Accuracy | 76.61 | 198 |
| Node Classification | Ogbn-arxiv | Accuracy | 71.69 | 191 |
| Graph Classification | HIV | ROC-AUC | 0.6832 | 104 |
| Edge Classification | FB15K237 | Accuracy | 72.81 | 17 |
| Edge Classification | WN18RR | Accuracy | 83.73 | 17 |
| Graph Classification | PCBA | AUC-ROC | 71.93 | 17 |
