
Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering

About

Recent self-supervised pre-training methods on Heterogeneous Information Networks (HINs) have shown promising competitiveness over traditional semi-supervised Heterogeneous Graph Neural Networks (HGNNs). Unfortunately, their performance heavily depends on careful customization of various strategies for generating high-quality positive examples and negative examples, which notably limits their flexibility and generalization ability. In this work, we present SHGP, a novel Self-supervised Heterogeneous Graph Pre-training approach, which does not need to generate any positive examples or negative examples. It consists of two modules that share the same attention-aggregation scheme. In each iteration, the Att-LPA module produces pseudo-labels through structural clustering, which serve as the self-supervision signals to guide the Att-HGNN module to learn object embeddings and attention coefficients. The two modules can effectively utilize and enhance each other, promoting the model to learn discriminative embeddings. Extensive experiments on four real-world datasets demonstrate the superior effectiveness of SHGP against state-of-the-art unsupervised baselines and even semi-supervised baselines. We release our source code at: https://github.com/kepsail/SHGP.
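The alternating scheme described above can be illustrated with a toy sketch: one module propagates cluster scores over the graph to produce structural pseudo-labels, and the other is trained against those pseudo-labels. Everything below is a simplified, hypothetical stand-in (a homogeneous graph, plain mean aggregation instead of heterogeneous attention, and a hand-rolled gradient step); it is not the authors' implementation, which is available at the GitHub link above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy homogeneous graph stand-in. The real SHGP operates on a
# heterogeneous graph with type-aware attention; here we use a single
# row-normalized adjacency matrix as the shared aggregation scheme.
n, d, k = 12, 8, 3                                # nodes, feature dim, clusters
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                            # symmetric
np.fill_diagonal(A, 1.0)                          # self-loops
A = A / A.sum(1, keepdims=True)                   # row-normalized aggregation
X = rng.normal(size=(n, d))                       # node features

W = rng.normal(scale=0.1, size=(d, k))            # shared projection weights

def pseudo_labels(H):
    """Att-LPA stand-in: propagate cluster scores over the graph
    (structural clustering) and take the argmax as pseudo-labels."""
    S = H.copy()
    for _ in range(5):
        S = A @ S                                 # label propagation
    return S.argmax(1)

for step in range(50):
    H = A @ X @ W                                 # Att-HGNN stand-in: aggregate, then project
    y = pseudo_labels(H)                          # self-supervision signal
    # Softmax cross-entropy against the pseudo-labels.
    P = np.exp(H - H.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    G = P.copy()
    G[np.arange(n), y] -= 1.0                     # gradient of CE w.r.t. logits H
    W -= 0.1 * (A @ X).T @ G / n                  # gradient step on the shared weights

H = A @ X @ W
acc = (H.argmax(1) == pseudo_labels(H)).mean()
print(f"agreement between embedding predictions and pseudo-labels: {acc:.2f}")
```

In the paper the two modules share the same attention-aggregation scheme, so the pseudo-labels and the embeddings improve each other across iterations; the sketch mimics that loop with a single shared weight matrix.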

Yaming Yang, Ziyu Guan, Zhe Wang, Wei Zhao, Cai Xu, Weigang Lu, Jianbin Huang · 2022

Related benchmarks

Task                    Dataset  Metric    Result  Rank
Node Classification     IMDB     Macro F1  0.4802  179
Node Clustering         ACM      ARI       32.63   57
Node Clustering         DBLP     NMI       0.733   39
Clustering              IMDB     --        --      34
Object Classification   MAG      Micro F1  98.37   24
Object Classification   DBLP     Micro F1  94.13   24
Object Classification   ACM      Micro F1  80.91   24
Object Clustering       MAG      NMI       90.65   6
