# Disco-RAG: Discourse-Aware Retrieval-Augmented Generation

## About
Retrieval-Augmented Generation (RAG) has emerged as an important means of enhancing the performance of large language models (LLMs) on knowledge-intensive tasks. However, most existing RAG strategies treat retrieved passages as flat, unstructured text, which prevents the model from capturing structural cues and constrains its ability to synthesize knowledge from evidence dispersed across documents. To overcome these limitations, we propose Disco-RAG, a discourse-aware framework that explicitly injects discourse signals into the generation process. Our method constructs intra-chunk discourse trees to capture local hierarchies and builds inter-chunk rhetorical graphs to model cross-passage coherence. These structures are jointly integrated into a planning blueprint that conditions generation. Experiments on question-answering and long-document summarization benchmarks demonstrate the efficacy of our approach: Disco-RAG achieves state-of-the-art results without fine-tuning. These findings underscore the important role of discourse structure in advancing RAG systems.
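The pipeline described above (intra-chunk discourse trees, an inter-chunk rhetorical graph, and a planning blueprint that conditions generation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the real system would use a discourse (e.g. RST) parser and learned rhetorical relations, whereas here sentence splitting stands in for tree construction and shared content words stand in for cross-passage coherence links. All names (`DiscourseNode`, `build_rhetorical_graph`, `build_blueprint`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseNode:
    """A node in a (toy) intra-chunk discourse tree."""
    text: str
    children: list = field(default_factory=list)

def build_discourse_tree(chunk: str) -> DiscourseNode:
    # Hypothetical stand-in: Disco-RAG would use a real discourse parser;
    # here each sentence simply becomes a child of the chunk root.
    root = DiscourseNode(chunk)
    for sent in chunk.split(". "):
        if sent:
            root.children.append(DiscourseNode(sent.rstrip(".")))
    return root

def build_rhetorical_graph(chunks):
    # Hypothetical proxy for cross-passage coherence: connect two chunks
    # when they share at least one content word (here, >4 characters).
    vocab = [{w.lower().strip(".,") for w in c.split() if len(w) > 4}
             for c in chunks]
    edges = set()
    for i in range(len(chunks)):
        for j in range(i + 1, len(chunks)):
            if vocab[i] & vocab[j]:
                edges.add((i, j))
    return edges

def build_blueprint(chunks, edges):
    # Linearize chunks and their coherence links into a textual plan
    # that would be prepended to the generator's prompt.
    lines = [f"[{i}] {c}" for i, c in enumerate(chunks)]
    lines += [f"link {i}->{j}" for i, j in sorted(edges)]
    return "\n".join(lines)
```

Under these assumptions, two passages that mention the same entity get a coherence edge, and the blueprint makes that link explicit to the generator instead of leaving the passages as an unordered bag of text.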
## Related benchmarks
| Task | Dataset | LLM Score | Rank |
|---|---|---|---|
| Chain-of-reasoning | Loong Set 2: 50K–100K Tokens | 58.23 | 12 |
| Chain-of-reasoning | Loong Set 3: 100K–200K Tokens | 52.17 | 12 |
| Chain-of-reasoning | Loong Set 4: 200K–250K Tokens | 36.17 | 12 |
| Clustering | Loong Set 1: 10K–50K Tokens | 65.36 | 12 |
| Clustering | Loong Set 2: 50K–100K Tokens | 61.67 | 12 |
| Clustering | Loong Set 3: 100K–200K Tokens | 58.85 | 12 |
| Clustering | Loong Set 4: 200K–250K Tokens | 57.53 | 12 |
| Comparison | Loong Set 1: 10K–50K Tokens | 75.65 | 12 |
| Comparison | Loong Set 2: 50K–100K Tokens | 64.34 | 12 |
| Comparison | Loong Set 3: 100K–200K Tokens | 57.84 | 12 |