UT-ACA: Uncertainty-Triggered Adaptive Context Allocation for Long-Context Inference
About
Long-context inference remains challenging for large language models due to attention dilution and out-of-distribution degradation. Context selection mitigates this by attending only to a subset of key-value cache entries, yet most methods allocate a fixed context budget throughout decoding even though token-level contextual demands are highly non-uniform. To address this, we propose Uncertainty-Triggered Adaptive Context Allocation (UT-ACA), an inference-time framework that dynamically adjusts the context window based on token-wise uncertainty. UT-ACA learns an uncertainty detector that combines semantic embeddings with logit-based confidence while accounting for uncertainty accumulation across decoding steps. When the detector signals insufficient evidence, UT-ACA rolls back, expands the context window, and regenerates the token with additional support. Experiments show that UT-ACA substantially reduces average context usage while preserving generation quality in long-context settings.
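The paper itself does not release reference code here, so the following is only a minimal sketch of the decode loop described above: per-token uncertainty (approximated here by logit entropy alone, without the semantic-embedding component), a decayed accumulator across decoding steps, and a rollback that regenerates the current token with an expanded context window. All names (`step_fn`, `tau`, `decay`, the doubling schedule) are illustrative assumptions, not the authors' implementation.

```python
import math

def token_entropy(probs):
    """Shannon entropy of the next-token distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_decode(step_fn, max_steps, base_window=256, max_window=2048,
                    tau=1.0, decay=0.9):
    """Uncertainty-triggered decoding sketch (hypothetical interface).

    step_fn(window, generated) -> (token, probs): one decode step under a
    given context-window size. When the decayed uncertainty accumulator
    exceeds tau, the current token is rolled back and regenerated with a
    doubled context window.
    """
    generated = []
    window = base_window
    acc = 0.0  # uncertainty accumulated across decoding steps
    for _ in range(max_steps):
        token, probs = step_fn(window, generated)
        acc = decay * acc + token_entropy(probs)
        # Rollback trigger: discard the token, expand context, regenerate.
        while acc > tau and window < max_window:
            window = min(2 * window, max_window)
            token, probs = step_fn(window, generated)
            acc = token_entropy(probs)  # reset accumulator after expansion
        generated.append(token)
    return generated
```

With a toy `step_fn` that is uncertain below a 512-token window and confident above it, the loop expands once and then decodes confidently at the larger window; the real method would instead consult the learned detector at each step.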
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Context Management | Long-context (test) | mTokens | 119 | 19 |
| Summarization | LongBench Summary (test) | Score | 28.43 | 17 |
| Question Answering | ∞-Bench Longbook QA English (test) | Tokens | 3.71e+3 | 9 |
| Summarization | LongBench samsum | mTokens | 285 | 8 |
| Question Answering | RULER QA-16k (test) | Token Count | 385 | 8 |
| Question Answering | RULER QA-8k (test) | Token Count | 352 | 8 |
| Question Answering | LongBench multifieldqa | Mean Tokens Used | 133 | 8 |
| Question Answering | LongBench narrativeqa | Tokens | 252 | 8 |
| Biography summarization | Biography summarization (val) | Output Tokens | 29 | 4 |