
DocSage: An Information Structuring Agent for Multi-Doc Multi-Entity Question Answering

About

Multi-document Multi-entity Question Answering (MDMEQA) inherently requires models to track implicit logic among multiple entities scattered across documents. However, existing Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) frameworks suffer from critical limitations: standard RAG's coarse-grained, vector-similarity-based retrieval often omits critical facts; graph-based RAG fails to efficiently integrate fragmented, complex relationship networks; and both lack schema awareness, leading to inadequate cross-document evidence-chain construction and inaccurate entity-relationship deduction. To address these challenges, we propose DocSage, an end-to-end agentic framework that integrates dynamic schema discovery, structured information extraction, and schema-aware relational reasoning with error guarantees. DocSage operates through three core modules: (1) a schema discovery module that dynamically infers query-specific minimal joinable schemas to capture essential entities and relationships; (2) an extraction module that transforms unstructured text into semantically coherent relational tables, with error-aware correction mechanisms to reduce extraction errors; (3) a reasoning module that performs multi-hop relational reasoning over the structured tables, leveraging schema awareness to efficiently align cross-document entities and aggregate evidence. This agentic design offers three key advantages: precise fact localization via SQL-powered indexing, natural support for cross-document entity joins through relational tables, and mitigated LLM attention diffusion via structured representation. Evaluations on two MDMEQA benchmarks show that DocSage significantly outperforms state-of-the-art long-context LLMs and RAG systems, achieving accuracy improvements of more than 27% on both benchmarks.
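The three-module pipeline can be sketched in miniature with standard-library SQLite. This is a minimal illustration under assumptions, not the paper's implementation: the table names, columns, and question are hypothetical, and in DocSage the schema and rows would come from LLM-driven schema discovery and extraction rather than being hard-coded.

```python
# Hypothetical sketch of DocSage's structured reasoning over relational tables.
# All schema/table names and data here are illustrative, not from the paper.
import sqlite3

conn = sqlite3.connect(":memory:")

# (1) Schema discovery: a query-specific "minimal joinable schema" --
# two tables that share the join key `company`.
conn.executescript("""
    CREATE TABLE ceo(company TEXT PRIMARY KEY, ceo TEXT);               -- facts from doc A
    CREATE TABLE revenue(company TEXT PRIMARY KEY, revenue_busd REAL);  -- facts from doc B
""")

# (2) Extraction: unstructured text from two documents becomes rows.
conn.executemany("INSERT INTO ceo VALUES (?, ?)",
                 [("Acme", "A. Smith"), ("Globex", "B. Jones")])
conn.executemany("INSERT INTO revenue VALUES (?, ?)",
                 [("Acme", 12.5), ("Globex", 9.1)])

# (3) Reasoning: a cross-document entity join answers a multi-hop
# question such as "Which CEO leads the highest-revenue company?"
row = conn.execute("""
    SELECT c.ceo
    FROM ceo c JOIN revenue r ON c.company = r.company
    ORDER BY r.revenue_busd DESC
    LIMIT 1
""").fetchone()
answer = row[0]
print(answer)  # A. Smith
```

The point of the structure is the join: once both documents' facts share a schema, cross-document aggregation is an exact SQL operation rather than a similarity lookup, which is what the abstract means by precise fact localization via SQL-powered indexing.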

Teng Lin, Yizhang Zhu, Zhengxuan Zhang, Yuyu Luo, Nan Tang• 2026

Related benchmarks

Task | Dataset | Metric | Score | Rank
Multi-document Multi-entity Question Answering | Loong (All sets) | Spotlight Locating Avg Score | 85.06 | 5
Multi-document Multi-entity Question Answering | Loong (10K-50K tokens) | Spotlight Locating Avg Score | 91.12 | 5
Multi-document Multi-entity Question Answering | Loong (50K-100K tokens) | Spotlight Locating Avg Score | 88.79 | 5
Multi-document Multi-entity Question Answering | Loong (100K-200K tokens) | Spotlight Locating Avg Score | 81.44 | 5
Multi-document Multi-entity Question Answering | Loong (200K-250K tokens) | Spotlight Locating Avg Score | 72.86 | 5
Multi-entity Reasoning | MEBench (All sets) | Comparison Accuracy | 93.4 | 5
Multi-entity Reasoning | MEBench Set1 (0-10) | Comparison Accuracy | 96.8 | 5
Multi-entity Reasoning | MEBench Set2 (11-100) | Comparison Accuracy | 95.2 | 5
Multi-entity Reasoning | MEBench Set3 (>100) | Comparison Accuracy | 94.6 | 5
