
GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models

About

This work studies self-supervised graph learning for text-attributed graphs (TAGs), where nodes are represented by textual attributes. Unlike traditional graph contrastive methods that perturb the numerical feature space and alter the graph's topological structure, we aim to improve view generation through language supervision. This is driven by the prevalence of textual attributes in real applications, which complement graph structures with rich semantic information. However, this presents challenges for two major reasons. First, text attributes often vary in length and quality, making it difficult to perturb raw text descriptions without altering their original semantic meanings. Second, although text attributes complement graph structures, they are not inherently well-aligned. To bridge the gap, we introduce GAugLLM, a novel framework for augmenting TAGs. It leverages advanced large language models like Mistral to enhance self-supervised graph learning. Specifically, we introduce a mixture-of-prompt-expert technique to generate augmented node features. This approach adaptively maps multiple prompt experts, each of which modifies raw text attributes using prompt engineering, into numerical feature space. Additionally, we devise a collaborative edge modifier that leverages structural and textual commonalities, enhancing edge augmentation by examining or building connections between nodes. Empirical results across five benchmark datasets spanning various domains underscore our framework's ability to enhance the performance of leading contrastive methods as a plug-in tool. Notably, we observe that the augmented features and graph structure can also enhance the performance of standard generative methods, as well as popular graph neural networks. The open-sourced implementation of GAugLLM is available on GitHub.
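As a rough illustration of the mixture-of-prompt-expert idea described in the abstract, the sketch below (all names hypothetical, not the authors' implementation) gates several per-expert text embeddings into a single augmented node feature. It assumes each prompt expert has already rewritten the raw text attribute and encoded it into a fixed-size vector; the "adaptive mapping" is modeled here as a simple softmax gate:

```python
import numpy as np

def mixture_of_prompt_experts(expert_embeddings, gate_scores):
    """Combine per-expert text embeddings into one augmented feature.

    expert_embeddings: (num_experts, dim) array; one embedding per
        prompt expert, where each expert rewrites the raw text
        attribute with a different prompt before encoding it.
    gate_scores: (num_experts,) unnormalized gating scores, e.g.
        produced by a small learned network over the node's context.
    """
    # Numerically stable softmax over experts so weights sum to 1.
    w = np.exp(gate_scores - gate_scores.max())
    w = w / w.sum()
    # Weighted sum of expert embeddings -> augmented node feature.
    return w @ expert_embeddings

# Toy usage: 3 experts with 4-dimensional embeddings and equal scores,
# yielding a uniform average of the three expert vectors.
experts = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
feature = mixture_of_prompt_experts(experts, np.zeros(3))
```

In the paper's setting, the gate would be trained jointly with the contrastive objective; the sketch only shows the shape of the computation.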

Yi Fang, Dongzhe Fan, Daochen Zha, Qiaoyu Tan • 2024

Related benchmarks

Task                 Dataset   Metric    Score  Rank
Node Classification  Citeseer  Accuracy  70.19  931
Node Classification  PubMed    Accuracy  80.59  819
Node Classification  WikiCS    Accuracy  80.32  317
Node Classification  arXiv     Accuracy  73.59  219
Node Classification  Photo     Accuracy  76.39  139
Node Classification  Computer  Accuracy  87.79  89
Node Classification  Books     Accuracy  82.12  15
