
Stacked from One: Multi-Scale Self-Injection for Context Window Extension

About

The limited context window of contemporary large language models (LLMs) remains a primary bottleneck for their broader application across diverse domains. Although continual pre-training on long-context data offers a straightforward solution, it incurs prohibitive data acquisition and computational costs. To address this challenge, we propose SharedLLM, a novel framework based on multi-grained context compression and query-aware information acquisition. SharedLLM comprises two stacked short-context LLMs: a lower model serving as a compressor and an upper model acting as a decoder. The lower model compresses long inputs into compact, multi-grained representations, which are then forwarded to the upper model for context-aware processing. To maximize efficiency, this information transfer occurs exclusively at the lowest layers, bypassing lengthy forward passes and redundant cross-attention operations. This entire process, wherein the upper and lower models are derived from the same underlying LLM layers, is termed self-injection. To support this architecture, a specialized tree-based data structure enables the efficient encoding and query-aware retrieval of contextual information. Despite being trained on sequences of only 8K tokens, SharedLLM effectively generalizes to inputs exceeding 128K tokens. Across a comprehensive suite of long-context modeling and understanding benchmarks, SharedLLM achieves performance superior or comparable to strong baselines, striking an optimal balance between efficiency and accuracy. Furthermore, these design choices allow SharedLLM to substantially reduce the memory footprint and yield notable inference speedups (2× over streaming and 3× over encoder-decoder architectures).
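The abstract's core ideas — chunking the long context into a tree of multi-grained compressed states, then retrieving fine or coarse states per chunk depending on query relevance, under a fixed injection budget — can be illustrated with a small sketch. This is not the paper's implementation: the names (`ChunkNode`, `retrieve`), the mean-pooling compressor, and the dot-product relevance score are all simplifying assumptions standing in for the lower LLM's learned representations.

```python
# Hedged sketch of multi-grained compression with query-aware retrieval,
# in the spirit of SharedLLM's tree-based context structure. All names
# and scoring functions here are illustrative, not the paper's API.
# Plain float vectors stand in for the lower model's hidden states.
from dataclasses import dataclass, field


def mean_pool(vectors, stride):
    """Compress a list of vectors by averaging non-overlapping groups of `stride`."""
    pooled = []
    for i in range(0, len(vectors), stride):
        group = vectors[i:i + stride]
        dim = len(group[0])
        pooled.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return pooled


@dataclass
class ChunkNode:
    """One context chunk holding representations at several granularities."""
    raw: list                                    # per-token stand-in states
    levels: dict = field(default_factory=dict)   # stride -> pooled states

    def compress(self, strides=(2, 4)):
        # Build the multi-grained levels (finer stride = more tokens kept).
        for s in strides:
            self.levels[s] = mean_pool(self.raw, s)


def relevance(query, node):
    """Toy relevance score: dot product of the query with the chunk's summary."""
    summary = mean_pool(node.raw, len(node.raw))[0]
    return sum(q * s for q, s in zip(query, summary))


def retrieve(query, nodes, budget):
    """Query-aware retrieval: spend the injection budget on fine-grained
    states for the most relevant chunks, falling back to coarse states."""
    ranked = sorted(nodes, key=lambda n: relevance(query, n), reverse=True)
    injected = []
    for node in ranked:
        fine, coarse = node.levels[2], node.levels[4]
        chosen = fine if len(injected) + len(fine) <= budget else coarse
        if len(injected) + len(chosen) <= budget:
            injected.extend(chosen)
    return injected  # states to hand to the decoder's lowest layers
```

In this toy form, "self-injection" would correspond to prepending the returned states to the upper model's lowest-layer inputs; the budget plays the role of the compressed-context length the decoder can afford.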

Wei Han, Pan Zhou, Soujanya Poria, Shuicheng Yan• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | PG-19 | Perplexity | 5.96 | 160
Long-context Understanding | LongBench (test) | -- | -- | 136
Language Modeling | Proof-pile | Perplexity | 2.33 | 58
Language Modeling | arXiv | Perplexity | 2.46 | 55
Long-context Understanding | Infini-Bench (test) | Math Score | 17.26 | 21
Language Modeling | PG19 4K | Perplexity | 8.68 | 8
Language Modeling | PG19 16K | Perplexity | 8.01 | 8
Language Modeling | ProofPile (16K) | Perplexity | 3.24 | 8
Language Modeling | CodeParrot 4K | Perplexity | 2.33 | 8
Language Modeling | CodeParrot 16K | Perplexity | 2.25 | 8
Showing 10 of 17 rows
