Towards robust long-context understanding of large language models via active recap learning
About
In this paper, we propose Active Recap Learning (ARL), a framework for enhancing long-context understanding in large language models (LLMs). ARL enables models to revisit and summarize earlier content through targeted sequence construction during continued pretraining and retrospective summarization at inference. First, we identify key tokens in a prepared long context based on loss gaps between long and short forward contexts, locate the most relevant preceding paragraphs, and summarize them with an LLM. Second, ARL equips models with the ability to autonomously generate and use these retrospective summaries at inference time, establishing a recursive memory mechanism across paragraphs. Experimental results show substantial gains: ARL achieves a 26.8% improvement on RULER and a 9.44% improvement on LongBench. Overall, ARL offers a simple yet effective continued-pretraining approach that strengthens long-context understanding and advances scalable memory augmentation in LLMs.
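The key-token selection step above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes per-token losses are already available from two forward passes (one with the full long context, one with a truncated short context), and the function name `select_key_tokens` and the gap threshold are hypothetical choices.

```python
def select_key_tokens(long_ctx_losses, short_ctx_losses, threshold=1.0):
    """Return indices of tokens whose loss drops most when the long
    context is available, i.e. tokens that depend on distant content.

    long_ctx_losses / short_ctx_losses: per-token negative log-likelihoods
    from forward passes with the long and short contexts, respectively.
    """
    assert len(long_ctx_losses) == len(short_ctx_losses)
    # A large positive gap means the long context substantially helps
    # predict this token, marking it as a "key token" for recap.
    gaps = [s - l for s, l in zip(short_ctx_losses, long_ctx_losses)]
    return [i for i, g in enumerate(gaps) if g >= threshold]

# Toy example: only token 2 benefits strongly from the long context.
short = [2.1, 1.8, 4.0, 1.2]
long_ = [2.0, 1.7, 1.5, 1.1]
print(select_key_tokens(long_, short, threshold=1.0))  # [2]
```

In ARL, the paragraphs most relevant to these key tokens would then be retrieved and summarized by an LLM to build the recap sequences used in continued pretraining.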
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Long-context Language Understanding | RULER (32k context length) | Average Score | 24.1 | 30 |
| Long-context Language Understanding | RULER (16k context length) | -- | -- | 8 |
| Long-context Language Understanding | RULER (8k context length) | Average Task Score | 66.5 | 4 |
| Summarization | LongBench GovReport | Score | 18.2 | 2 |
| Summarization | LongBench QMSum | Score | 15.91 | 2 |
| Summarization | LongBench MultiNews | Score | 14.91 | 2 |
| Summarization | LongBench VCSUM | Score | 9.04 | 2 |