
Adaptive Multi-view Graph Contrastive Learning via Fractional-order Neural Diffusion Networks

About

Graph contrastive learning (GCL) learns node and graph representations by contrasting multiple views of the same graph. Existing methods typically rely on fixed, handcrafted views (usually one local and one global perspective), which limits their ability to capture multi-scale structural patterns. We present an augmentation-free, multi-view GCL framework grounded in fractional-order continuous dynamics. By varying the fractional derivative order $\alpha \in (0,1]$, our encoders produce a continuous spectrum of views: small $\alpha$ yields localized features, while large $\alpha$ induces broader, global aggregation. We treat $\alpha$ as a learnable parameter, so the model can adapt its diffusion scales to the data and automatically discover informative views. This principled approach generates diverse, complementary representations without manual augmentations. Extensive experiments on standard benchmarks demonstrate that our method produces more robust and expressive embeddings and outperforms state-of-the-art GCL baselines.
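The abstract's core idea, that the fractional order $\alpha$ controls how local or global the diffusion-based view is, can be illustrated with a minimal numerical sketch. This is an assumption-laden illustration, not the paper's actual encoder: it uses a Grünwald–Letnikov discretization of the fractional ODE $D^\alpha X = -LX$ on a graph Laplacian $L$, which is one standard way to simulate fractional-order diffusion; the function names and step sizes here are invented for the example.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald–Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    # computed via the standard recurrence w_k = w_{k-1} * (1 - (alpha+1)/k).
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_diffusion(X, L, alpha, steps=20, h=0.1):
    """Explicit Grünwald–Letnikov scheme for D^alpha X = -L X.

    Small alpha gives slow, memory-heavy mixing (localized features);
    alpha = 1 recovers forward-Euler heat diffusion (global smoothing).
    """
    w = gl_weights(alpha, steps)
    history = [np.asarray(X, dtype=float)]
    for n in range(1, steps + 1):
        # Memory term: fractional dynamics depend on the full trajectory,
        # not just the previous state.
        mem = sum(w[k] * history[n - k] for k in range(1, n + 1))
        X_new = (h ** alpha) * (-L @ history[-1]) - mem
        history.append(X_new)
    return history[-1]
```

In a learnable setting, `alpha` would be a trainable parameter per view (as the abstract describes), with each choice of `alpha` producing a differently-scaled embedding of the same graph; here it is fixed per call purely for illustration.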

Yanan Zhao, Feng Ji, Jingyang Dai, Jiaze Ma, Keyue Jiang, Kai Zhao, Wee Peng Tay• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Graph Classification | PROTEINS | Accuracy | 75.42 | 994 |
| Node Classification | Cora (test) | Mean Accuracy | 84.4 | 861 |
| Node Classification | Citeseer (test) | Accuracy | 0.7332 | 824 |
| Node Classification | Chameleon | Accuracy | 45.57 | 640 |
| Node Classification | Wisconsin | Accuracy | 78.04 | 627 |
| Node Classification | Texas | Accuracy | 0.8019 | 616 |
| Node Classification | Squirrel | Accuracy | 41.47 | 591 |
| Node Classification | Cornell | Accuracy | 72.7 | 582 |
| Node Classification | Actor | Accuracy | 35.91 | 397 |
| Node Classification | Photo | Mean Accuracy | 94.27 | 343 |
Showing 10 of 20 rows
