
UT-ACA: Uncertainty-Triggered Adaptive Context Allocation for Long-Context Inference

About

Long-context inference remains challenging for large language models due to attention dilution and out-of-distribution degradation. Context selection mitigates this limitation by attending to a subset of key-value cache entries, yet most methods allocate a fixed context budget throughout decoding despite highly non-uniform token-level contextual demands. To address this issue, we propose Uncertainty-Triggered Adaptive Context Allocation (UT-ACA), an inference-time framework that dynamically adjusts the context window based on token-wise uncertainty. UT-ACA learns an uncertainty detector that combines semantic embeddings with logit-based confidence while accounting for uncertainty accumulation across decoding steps. When insufficient evidence is indicated, UT-ACA selectively rolls back, expands the context window, and regenerates the token with additional support. Experiments show that UT-ACA substantially reduces average context usage while preserving generation quality in long-context settings.
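The rollback-and-expand loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `step_fn` stand-in for a decoding step, the entropy-based confidence signal, the exponential accumulation decay, and the doubling window schedule are all assumptions made for the example.

```python
import math

def token_entropy(probs):
    """Logit-based confidence signal: entropy of the next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decode_ut_aca(step_fn, max_steps, base_window, max_window,
                  threshold=1.0, decay=0.9):
    """Uncertainty-triggered adaptive context allocation (illustrative sketch).

    step_fn(window, generated) -> (token, probs) stands in for one decoding
    step of an LLM restricted to `window` key-value cache entries.
    """
    generated = []
    window = base_window
    accumulated = 0.0  # uncertainty accumulated across decoding steps
    step = 0
    while step < max_steps:
        token, probs = step_fn(window, generated)
        u = token_entropy(probs)
        accumulated = decay * accumulated + u
        if accumulated > threshold and window < max_window:
            # Insufficient evidence: roll back this token, expand the
            # context window, and regenerate with additional support.
            window = min(2 * window, max_window)
            accumulated = 0.0
            continue  # regenerate the same position
        generated.append(token)
        step += 1
    return generated, window

# Toy model: uncertain (uniform) with a small window, confident once expanded.
def toy_step(window, generated):
    if window < 8:
        return len(generated), [0.25] * 4
    return len(generated), [0.97, 0.01, 0.01, 0.01]

tokens, final_window = decode_ut_aca(toy_step, max_steps=3,
                                     base_window=2, max_window=16)
```

With this toy model, the loop expands the window from 2 to 8 before emitting any token, then decodes the remaining steps at the larger budget, mirroring the paper's claim that extra context is allocated only when uncertainty demands it.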

Lang Zhou, Shuxuan Li, Zhuohao Li, Shi Liu, Zhilin Zhao, Wei-Shi Zheng • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Context Management | Long-context (test) | mTokens | 119 | 19
Summarization | LongBench Summary (test) | Score | 28.43 | 17
Question Answering | ∞-Bench Longbook QA English (test) | Tokens | 3.71e+3 | 9
Summarization | LongBench samsum | mTokens | 285 | 8
Question Answering | RULER QA-16k (test) | Token Count | 385 | 8
Question Answering | RULER QA-8k (test) | Token Count | 352 | 8
Question Answering | LongBench multifieldqa | Mean Tokens Used | 133 | 8
Question Answering | LongBench narrativeqa | Tokens | 252 | 8
Biography summarization | Biography summarization (val) | Output Tokens | 29 | 4
