ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning

About

Narrative comprehension on long stories and novels is a challenging domain, owing to their intricate plotlines and entangled, often evolving relations among characters and entities. Given LLMs' diminished reasoning over extended context and the high computational cost, retrieval-based approaches continue to play a pivotal role in practice. However, traditional RAG methods can fall short due to their stateless, single-step retrieval process, which often overlooks the dynamic nature of capturing interconnected relations within long-range context. In this work, we propose ComoRAG, holding the principle that narrative reasoning is not a one-shot process, but a dynamic, evolving interplay between new evidence acquisition and past knowledge consolidation, analogous to human cognition when reasoning with memory-related signals in the brain. Specifically, when encountering a reasoning impasse, ComoRAG undergoes iterative reasoning cycles while interacting with a dynamic memory workspace. In each cycle, it generates probing queries to devise new exploratory paths, then integrates the retrieved evidence of new aspects into a global memory pool, thereby supporting the emergence of a coherent context for query resolution. Across four challenging long-context narrative benchmarks (200K+ tokens), ComoRAG outperforms strong RAG baselines, with consistent relative gains of up to 11% over the strongest baseline. Further analysis reveals that ComoRAG is particularly advantageous for complex queries requiring global context comprehension, offering a principled, cognitively motivated paradigm towards retrieval-based stateful reasoning. Our framework is made publicly available at https://github.com/EternityJune25/ComoRAG.
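The cycle described above (probe → retrieve → consolidate into a global memory pool, repeated until the impasse is lifted) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`retrieve`, `probe_queries`, `comorag_answer`), the toy word-overlap retriever, and the `resolved` stopping predicate are all hypothetical stand-ins for the LLM-driven components in the actual framework.

```python
# Hedged sketch of ComoRAG's iterative memory-organized loop.
# All names and the control flow are illustrative assumptions.

def retrieve(query, corpus, top_k=2):
    """Toy lexical retriever: rank passages by word overlap with the query."""
    def overlap(passage):
        return len(set(query.lower().split()) & set(passage.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:top_k]

def probe_queries(query, memory):
    """Stand-in for an LLM that devises new exploratory queries.
    Here we simply re-ask the query augmented with the latest evidence."""
    seen = " ".join(memory[-1:])  # reuse the most recent evidence as context
    return [query, f"{query} {seen}".strip()]

def comorag_answer(query, corpus, resolved, max_cycles=3):
    """Iterate probe -> retrieve -> consolidate into a global memory pool,
    stopping once the `resolved` predicate judges the memory sufficient."""
    memory = []  # global memory pool of consolidated evidence
    for _ in range(max_cycles):
        for probe in probe_queries(query, memory or [""]):
            for passage in retrieve(probe, corpus):
                if passage not in memory:  # consolidate without duplicates
                    memory.append(passage)
        if resolved(memory):  # reasoning impasse lifted?
            break
    return memory

# Usage on a three-passage toy "narrative":
corpus = [
    "Ann met Ben in Paris",
    "Ben later moved to Rome",
    "Ann writes letters to Rome",
]
memory = comorag_answer(
    "Where does Ben live",
    corpus,
    resolved=lambda m: any("Rome" in p for p in m),
)
```

The key design point mirrored here is statefulness: evidence from earlier cycles stays in the memory pool and shapes the probing queries of later cycles, unlike a single-shot retrieve-then-read pipeline.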

Juyuan Wang, Rongchen Zhao, Wei Wei, Yufeng Wang, Mo Yu, Jie Zhou, Jin Xu, Liyan Xu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Long narrative understanding QA | NoCha | -- | -- | 32 |
| Question Answering | GraphRAG-Benchmark MEDICAL | Fact Retrieval (FR) | 58.92 | 15 |
| Long narrative understanding QA | NarrativeQA | Accuracy | 54 | 14 |
| Generative sense-making QA | LongBench | Comprehensiveness | 0.6218 | 14 |
| Long narrative understanding QA | Prelude | Accuracy | 54.07 | 14 |
| Question Answering | 2WikiMultiHopQA, 1,000 queries (test) | EM | 48.4 | 13 |
| Question Answering | PopQA, 1,000 queries (test) | EM | 45.8 | 10 |
| Question Answering | NQ, 1,000 queries (test) | EM | 38.5 | 10 |
| Question Answering | MuSiQue, 1,000 queries (test) | EM | 24.5 | 10 |
| Question Answering | HotpotQA, 1,000 queries (test) | EM | 39.9 | 10 |

Showing 10 of 11 rows.
