
Towards robust long-context understanding of large language models via active recap learning

About

In this paper, we propose Active Recap Learning (ARL), a framework for enhancing large language models (LLMs) in understanding long contexts. ARL enables models to revisit and summarize earlier content through targeted sequence construction during continued pretraining and retrospective summarization at inference. First, we identify key tokens in prepared long contexts based on the loss gap between long and short forward contexts, locate the most relevant preceding paragraphs for each key token, and summarize them with an LLM. Second, ARL equips models with the ability to autonomously generate and use these retrospective summaries at inference time, establishing a recursive memory mechanism across paragraphs. Experimental results show substantial gains: ARL achieves a 26.8% improvement on RULER and a 9.44% improvement on LongBench. Overall, ARL offers a simple yet effective continued-pretraining approach that strengthens long-context understanding and advances scalable memory augmentation in LLMs.
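As a rough illustration of the key-token selection step described in the abstract, the sketch below ranks tokens by the gap between their loss under a short (truncated) context and under the full long context; a large gap suggests the token depends on distant content and is worth a retrospective summary. All names and the top-k selection criterion here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_key_tokens(loss_short, loss_long, top_k):
    """Return indices of tokens whose loss drops most with long context.

    loss_short: per-token loss computed with only a short forward context.
    loss_long:  per-token loss computed with the full long context.
    A large gap (loss_short - loss_long) suggests the token depends on
    distant context. This is an illustrative sketch, not the paper's code.
    """
    gap = np.asarray(loss_short) - np.asarray(loss_long)
    # Indices of the top_k largest gaps, in descending order of gap size.
    return np.argsort(gap)[::-1][:top_k].tolist()

# Toy per-token losses: token 2 benefits most from the long context.
loss_short = [2.0, 1.5, 3.0, 1.0]
loss_long = [1.8, 1.4, 0.5, 0.95]
print(select_key_tokens(loss_short, loss_long, top_k=2))  # → [2, 0]
```

In practice the two loss vectors would come from two forward passes of the same model over the same tokens, with and without the distant context in the window; only the selection logic is shown here.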

Chenyu Hui• 2026

Related benchmarks

Task | Dataset | Result | Rank
Long-context Language Understanding | RULER (32k context length) | Average Score: 24.1 | 30
Long-context Language Understanding | RULER (16k context length) | -- | 8
Long-context Understanding | RULER (8k context length) | Avg Task Score: 66.5 | 4
Summarization | LongBench GovReport | Score: 18.2 | 2
Summarization | LongBench QMSum | Score: 15.91 | 2
Summarization | LongBench MultiNews | Score: 14.91 | 2
Summarization | LongBench VCSUM | VCSUM Score: 9.04 | 2
