
StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization

About

Retrieval-augmented generation (RAG) is a key means of enhancing large language models (LLMs) on many knowledge-based tasks. However, existing RAG methods struggle with knowledge-intensive reasoning tasks, because the information required for these tasks is badly scattered across sources. This makes it difficult for existing RAG methods to accurately identify key information and perform global reasoning over such noisy augmentation. In this paper, motivated by cognitive theories holding that humans convert raw information into various forms of structured knowledge when tackling knowledge-intensive reasoning, we propose a new framework, StructRAG, which identifies the optimal structure type for the task at hand, reconstructs the original documents into that structured format, and infers answers based on the resulting structure. Extensive experiments across various knowledge-intensive tasks show that StructRAG achieves state-of-the-art performance, particularly excelling in challenging scenarios, demonstrating its potential as an effective solution for enhancing LLMs in complex real-world applications.
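The three steps the abstract describes (choose a structure type, restructure the retrieved documents, then answer over the structure) can be sketched as a minimal pipeline. The function names, the set of structure types, and the stub `toy_llm` below are illustrative assumptions for exposition, not the authors' actual implementation:

```python
# Hypothetical sketch of a StructRAG-style pipeline: route -> structurize -> infer.
# All names here are assumptions; a real system would call an actual LLM.

STRUCTURE_TYPES = ["table", "graph", "algorithm", "catalogue", "chunk"]

def route_structure(question: str, llm) -> str:
    """Step 1: ask the model which structure type best fits the task."""
    choice = llm(f"Pick one of {STRUCTURE_TYPES} for: {question}")
    return choice if choice in STRUCTURE_TYPES else "chunk"  # fall back to plain chunks

def structurize(docs: list[str], structure: str, llm) -> str:
    """Step 2: reconstruct the raw documents into the chosen structured format."""
    return llm(f"Convert into a {structure}:\n" + "\n".join(docs))

def answer(question: str, docs: list[str], llm) -> str:
    """Step 3: infer the answer using the structured knowledge."""
    structure = route_structure(question, llm)
    knowledge = structurize(docs, structure, llm)
    return llm(f"Using this {structure}:\n{knowledge}\nAnswer: {question}")

# Toy stand-in for an LLM so the sketch runs end to end.
def toy_llm(prompt: str) -> str:
    if prompt.startswith("Pick one"):
        return "table"
    if prompt.startswith("Convert"):
        return "structured knowledge"
    return "final answer"

print(answer("Which company had higher revenue?", ["doc A", "doc B"], toy_llm))
```

The key design choice is that structurization happens at inference time, per query, so the same corpus can be reshaped into a table for comparison questions or a graph for multi-hop ones.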

Zhuoqun Li, Xuanang Chen, Haiyang Yu, Hongyu Lin, Yaojie Lu, Qiaoyu Tang, Fei Huang, Xianpei Han, Le Sun, Yongbin Li• 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multi-hop Question Answering | HotpotQA | F1 Score | 42.25 | 294
Multi-hop Question Answering | MuSiQue | EM | 8.8 | 185
Multi-hop QA | HotpotQA | Exact Match | 27.5 | 76
Question Answering | 2WikiMQA | -- | -- | 44
General QA | NQ | EM | 27.9 | 38
Biomedical Multi-hop Question Answering | CondMedQA | EM | 65.71 | 36
General QA | PopQA | Exact Match (EM) | 36.5 | 28
Multi-hop QA | Bamboogle | EM | 44.8 | 27
Multi-hop QA | 2WikiMultihopQA | F1 Score | 37.2 | 23
Question Answering | NQ | Cover EM | 0.574 | 18

(Showing 10 of 51 rows)
