
Mixture of Demonstrations for Textual Graph Understanding and Question Answering

About

Textual graph-based retrieval-augmented generation (GraphRAG) has emerged as a powerful paradigm for enhancing large language models (LLMs) in domain-specific question answering. While existing approaches primarily focus on zero-shot GraphRAG, selecting high-quality demonstrations is crucial for improving reasoning and answer accuracy. Furthermore, recent studies have shown that retrieved subgraphs often contain irrelevant information, which can degrade reasoning performance. In this paper, we propose MixDemo, a novel GraphRAG framework enhanced with a Mixture-of-Experts (MoE) mechanism for selecting the most informative demonstrations under diverse question contexts. To further reduce noise in the retrieved subgraphs, we introduce a query-specific graph encoder that selectively attends to information most relevant to the query. Extensive experiments across multiple textual graph benchmarks show that MixDemo significantly outperforms existing methods.
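The paper does not detail its selection mechanism here, but the idea of an MoE gate choosing demonstrations per question can be sketched in a few lines. In this illustrative sketch (all names, the embedding-similarity gating, and the scoring scheme are assumptions, not the authors' method), each "expert" owns a pool of candidate demonstrations; a softmax gate over query-expert similarity weights each pool, and demonstrations are ranked by gate weight times query-demonstration similarity:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_demonstrations(query_vec, experts, k=2):
    """Pick the top-k demonstrations across all experts.

    experts: list of (expert_vec, pool) pairs, where pool is a list of
             (demo_text, demo_vec) tuples. Purely illustrative structure.
    """
    # gate: soft assignment of the query to each expert
    gate = softmax(np.array([cosine(query_vec, ev) for ev, _ in experts]))
    # score every demonstration by gate weight * query-demo similarity
    scored = []
    for g, (_, pool) in zip(gate, experts):
        for text, dv in pool:
            scored.append((g * cosine(query_vec, dv), text))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [text for _, text in scored[:k]]

# toy example: two experts with small demonstration pools
query = np.array([1.0, 0.0])
experts = [
    (np.array([1.0, 0.0]), [("demo_a", np.array([0.9, 0.1])),
                            ("demo_b", np.array([0.0, 1.0]))]),
    (np.array([0.0, 1.0]), [("demo_c", np.array([0.1, 0.9]))]),
]
print(select_demonstrations(query, experts, k=2))  # → ['demo_a', 'demo_c']
```

In practice the gate would be a learned network and the embeddings would come from the LLM or a graph encoder, but the routing logic, score pools by a query-conditioned gate and rank within them, is the core of any MoE-style selector.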

Yukun Wu, Lihui Liu • 2026

Related benchmarks

Task                               | Dataset     | Metric   | Result | Rank
Knowledge Graph Question Answering | WebQSP      | Hit@1    | 71.36  | 143
Textual Graph Reasoning            | ExplaGraphs | Accuracy | 87.31  | 9
Textual Graph Reasoning            | SceneGraphs | Accuracy | 82.32  | 9
