
TaSR-RAG: Taxonomy-guided Structured Reasoning for Retrieval-Augmented Generation

About

Retrieval-Augmented Generation (RAG) helps large language models (LLMs) answer knowledge-intensive and time-sensitive questions by conditioning generation on external evidence. However, most RAG systems still retrieve unstructured chunks and rely on one-shot generation, which often yields redundant context, low information density, and brittle multi-hop reasoning. While structured RAG pipelines can improve grounding, they typically require costly and error-prone graph construction or impose rigid entity-centric structures that do not align with the query's reasoning chain. We propose TaSR-RAG, a taxonomy-guided structured reasoning framework for evidence selection. We represent both queries and documents as relational triples, and constrain entity semantics with a lightweight two-level taxonomy to balance generalization and precision. Given a complex question, TaSR-RAG decomposes it into an ordered sequence of triple sub-queries with explicit latent variables, then performs step-wise evidence selection via hybrid triple matching that combines semantic similarity over raw triples with structural consistency over typed triples. By maintaining an explicit entity binding table across steps, TaSR-RAG resolves intermediate variables and reduces entity conflation without explicit graph construction or exhaustive search. Experiments on multiple multi-hop question answering benchmarks show that TaSR-RAG consistently outperforms strong RAG and structured-RAG baselines by up to 14%, while producing clearer evidence attribution and more faithful reasoning traces.
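The step-wise evidence selection described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy triple store, the sub-query format, and the use of string similarity as a stand-in for the hybrid semantic/structural matcher are all illustrative assumptions. It shows only the core loop: resolve ordered triple sub-queries one at a time, and carry intermediate entities forward in an explicit binding table.

```python
# Hypothetical sketch of step-wise evidence selection with an
# explicit entity binding table. String similarity stands in for
# the paper's hybrid (semantic + typed-structural) triple matcher.
from difflib import SequenceMatcher

def sim(a, b):
    """Toy stand-in for semantic similarity over triple components."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(sub_queries, triples):
    bindings = {}   # explicit entity binding table across steps
    evidence = []   # selected evidence triples, one per step
    for subj, rel, obj in sub_queries:
        # substitute latent variables already bound in earlier steps
        subj = bindings.get(subj, subj)
        obj = bindings.get(obj, obj)
        best, best_score = None, 0.0
        for s, r, o in triples:
            # score relation always; score slots only when grounded
            score = sim(rel, r)
            if not subj.startswith("?"):
                score += sim(subj, s)
            if not obj.startswith("?"):
                score += sim(obj, o)
            if score > best_score:
                best, best_score = (s, r, o), score
        if best is None:
            break
        evidence.append(best)
        # bind remaining latent variables to the matched entities
        if subj.startswith("?"):
            bindings[subj] = best[0]
        if obj.startswith("?"):
            bindings[obj] = best[2]
    return bindings, evidence

# Toy document triples and a two-hop decomposed query (assumed data)
triples = [
    ("Inception", "directed by", "Christopher Nolan"),
    ("Christopher Nolan", "born in", "London"),
    ("London", "capital of", "United Kingdom"),
]
sub_queries = [
    ("Inception", "directed by", "?x"),
    ("?x", "born in", "?y"),
]
bindings, evidence = resolve(sub_queries, triples)
print(bindings)  # {'?x': 'Christopher Nolan', '?y': 'London'}
```

Because "?x" is resolved to a concrete entity before the second sub-query is matched, the second hop scores candidates against "Christopher Nolan" rather than a free variable, which is what keeps distinct entities from being conflated across steps.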

Jiashuo Sun, Yixuan Xie, Jimeng Shi, Shaowen Wang, Jiawei Han • 2026

Related benchmarks

Task           Dataset           Metric   Result   Rank
Multi-hop QA   MuSiQue           EM       18.3     185
Multi-hop QA   HotpotQA          EM       38.7     76
General QA     NQ                EM       40.6     38
General QA     PopQA             EM       37.7     28
Multi-hop QA   Bamboogle         EM       45.6     27
Multi-hop QA   2WikiMultihopQA   F1       66.2     23
General QA     TriviaQA          EM       64.5     18
