DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens

About

Large Language Models (LLMs) face computational inefficiency and redundant processing when handling long-context inputs, prompting a focus on compression techniques. While existing semantic-vector-based compression methods achieve promising performance, they fail to account for the intrinsic variation in information density across context chunks, instead allocating soft tokens uniformly. This uniform distribution inevitably diminishes the allocation to information-critical regions. To address this, we propose Dynamic Allocation of Soft Tokens (DAST), a simple yet effective method that leverages the LLM's intrinsic understanding of contextual relevance to guide compression. DAST combines perplexity-based local information with attention-driven global information to dynamically allocate soft tokens to information-rich chunks, enabling effective, context-aware compression. Experimental results across multiple benchmarks demonstrate that DAST surpasses state-of-the-art methods.

Shaoshen Chen, Yangning Li, Zishan Xu, Yinghui Li, Xin Su, Zifei Shan, Hai-tao Zheng • 2025
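To make the allocation idea from the abstract concrete, here is a minimal Python/PyTorch sketch of how a soft-token budget could be split across chunks by mixing a perplexity-based (local) score with an attention-based (global) score. This is an illustration, not the authors' implementation: the function `allocate_soft_tokens`, the mixing weight `alpha`, the normalization scheme, and the choice to give higher-perplexity chunks more capacity are all assumptions made for this sketch.

```python
import torch

def allocate_soft_tokens(
    chunk_log_probs: list[torch.Tensor],  # per-token log-probs for each chunk (local signal)
    chunk_attention: torch.Tensor,        # aggregated attention mass per chunk (global signal)
    total_budget: int,                    # total number of soft tokens to distribute
    alpha: float = 0.5,                   # hypothetical mixing weight between the two signals
) -> list[int]:
    """Distribute a soft-token budget across chunks in proportion to an
    importance score mixing perplexity (local) and attention (global).

    Illustrative sketch only; not DAST's actual scoring or allocation rule.
    """
    # Local signal: per-chunk perplexity = exp(mean negative log-likelihood).
    # Assumption for this sketch: higher perplexity -> more soft tokens.
    ppl = torch.tensor([torch.exp(-lp.mean()).item() for lp in chunk_log_probs])

    # Normalize both signals so they are comparable before mixing.
    ppl_score = ppl / ppl.sum()
    attn_score = chunk_attention / chunk_attention.sum()
    importance = alpha * ppl_score + (1 - alpha) * attn_score  # sums to 1

    # Proportional allocation with flooring; leftover tokens go to the
    # chunks with the largest fractional remainder so the budget is exact.
    raw = importance * total_budget
    alloc = torch.floor(raw).long()
    leftover = total_budget - int(alloc.sum())
    if leftover > 0:
        top = torch.argsort(raw - alloc.float(), descending=True)[:leftover]
        alloc[top] += 1
    return alloc.tolist()

# Example usage with synthetic signals: 3 chunks, 16 soft tokens to split.
torch.manual_seed(0)
log_probs = [-torch.rand(50), -torch.rand(80), -torch.rand(30)]  # fake per-token log-probs
attention = torch.tensor([0.2, 0.5, 0.3])                        # fake per-chunk attention mass
print(allocate_soft_tokens(log_probs, attention, total_budget=16))
```

The proportional-with-remainder step is one simple way to turn continuous importance scores into an integer budget; the paper's actual allocation rule may differ.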

Related benchmarks

Task                Dataset                                      Result     Rank
Question Answering  TextbookQA (MRQA out-of-domain evaluation)   EM 22.75   37
Question Answering  RelExt (MRQA out-of-domain evaluation)       EM 35.92   37
Question Answering  MRQA 2019 (dev)                              --         32
Question Answering  DuoRC (MRQA out-of-domain evaluation)        EM 14.26   23
Question Answering  DROP (MRQA out-of-domain evaluation)         EM 22.42   23
Question Answering  MRQA (average across 6 domains)              EM 20.95   23
Question Answering  RACE (MRQA out-of-domain evaluation)         EM 3.26    23
Question Answering  BioASQ (MRQA out-of-domain evaluation)       F1 36.57   8
