
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering

About

Given a graph with textual attributes, we enable users to 'chat with their graph': that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate large language models (LLMs) and graph neural networks (GNNs) in various ways, they mostly focus on either conventional graph tasks (such as node, edge, and graph classification), or on answering simple graph queries on small or synthetic graphs. In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. Toward this goal, we first develop a Graph Question Answering (GraphQA) benchmark with data collected from different tasks. Then, we propose our G-Retriever method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. To resist hallucination and to allow for textual graphs that greatly exceed the LLM's context window size, G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem. Empirical evaluations show that our method outperforms baselines on textual graph tasks from multiple domains, scales well with larger graph sizes, and mitigates hallucination. Our code and datasets are available at: https://github.com/XiaoxinHe/G-Retriever
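To illustrate the Prize-Collecting Steiner Tree (PCST) formulation of retrieval described above, here is a minimal, self-contained sketch. It is not the paper's implementation: it assigns each node a "prize" equal to its cosine similarity with the query embedding and then greedily grows a connected subgraph, keeping a neighbor only when its prize exceeds a uniform edge cost. The function name `pcst_retrieve`, the greedy expansion rule, and the uniform `edge_cost` parameter are illustrative simplifications of the actual PCST solver used in G-Retriever.

```python
import numpy as np

def pcst_retrieve(node_emb, edges, edge_cost, query_emb, top_k=1):
    """Greedy sketch of prize-collecting subgraph retrieval.

    node_emb:  (n, d) array of node text embeddings
    edges:     list of undirected (u, v) index pairs
    edge_cost: uniform cost charged for each edge kept
    query_emb: (d,) query embedding
    """
    # Node prizes: cosine similarity between each node and the query.
    prizes = node_emb @ query_emb
    prizes = prizes / (np.linalg.norm(node_emb, axis=1)
                       * np.linalg.norm(query_emb) + 1e-9)

    # Seed the subgraph with the top-k highest-prize nodes.
    selected = set(np.argsort(-prizes)[:top_k].tolist())

    # Greedily absorb neighbors whose prize outweighs the edge cost,
    # so the retrieved subgraph stays connected and relevant.
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if u in selected and v not in selected and prizes[v] > edge_cost:
                selected.add(v)
                changed = True
            elif v in selected and u not in selected and prizes[u] > edge_cost:
                selected.add(u)
                changed = True

    kept_edges = [(u, v) for u, v in edges
                  if u in selected and v in selected]
    return sorted(selected), kept_edges
```

Because only the nodes and edges worth their cost survive, the returned subgraph is a compact context that can be serialized into the LLM prompt, which is what lets the method handle graphs far larger than the context window.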

Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, Bryan Hooi • 2024

Related benchmarks

Task                                | Dataset              | Result         | Rank
Multi-hop Question Answering        | HotpotQA (test)      | F1: 49.3       | 198
Multi-hop Question Answering        | 2WikiMultiHopQA (test) | EM: 36.2     | 143
Question Answering                  | 2WikiMultiHopQA (test) | --           | 69
Question Answering                  | NQ (test)            | --             | 66
Question Answering                  | MetaQA 3-hop         | Hits@1: 54.9   | 38
Recommendation                      | MovieLens 1M (test)  | Recall@3: 0.632 | 34
Knowledge Graph Question Answering  | WEBQSP (test)        | Hit: 73.79     | 30
Knowledge Base Question Answering   | MetaQA 1-hop         | Hits@1: 98.5   | 28
Recommendation                      | MovieLens 20M (test) | Accuracy: 50.2 | 24
Knowledge Graph Question Answering  | MetaQA 2-hop (test)  | Hits@1: 87.6   | 20
Showing 10 of 29 rows
