DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens
About
Large Language Models (LLMs) face computational inefficiencies and redundant processing when handling long context inputs, prompting a focus on compression techniques. While existing semantic vector-based compression methods achieve promising performance, they fail to account for the intrinsic variation in information density between context chunks, instead allocating soft tokens uniformly across them. This uniform distribution inevitably diminishes allocation to information-critical regions. To address this, we propose Dynamic Allocation of Soft Tokens (DAST), a simple yet effective method that leverages the LLM's intrinsic understanding of contextual relevance to guide compression. DAST combines perplexity-based local information with attention-driven global information to dynamically allocate soft tokens to information-rich chunks, enabling effective, context-aware compression. Experimental results across multiple benchmarks demonstrate that DAST surpasses state-of-the-art methods.
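The allocation idea described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the mixing weight `alpha`, the per-chunk floor `min_tokens`, and the min-max normalization are assumptions made for the sketch; in DAST the local and global scores would come from the LLM's per-chunk perplexity and attention, which are stubbed here as plain input arrays.

```python
import numpy as np

def allocate_soft_tokens(ppl_scores, attn_scores, total_budget, alpha=0.5, min_tokens=1):
    """Dynamically split a soft-token budget across context chunks.

    Hypothetical sketch of a DAST-style allocator: each chunk's importance
    combines a perplexity-based local score with an attention-based global
    score, and the budget is divided proportionally to the combined score.
    """
    ppl = np.asarray(ppl_scores, dtype=float)
    attn = np.asarray(attn_scores, dtype=float)

    # Min-max normalize each signal so the two scales are comparable.
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    # Combined importance score; tiny epsilon avoids an all-zero sum.
    score = alpha * norm(ppl) + (1.0 - alpha) * norm(attn) + 1e-9

    # Proportional allocation, with a floor so no chunk is starved.
    raw = score / score.sum() * (total_budget - min_tokens * len(score))
    alloc = np.floor(raw).astype(int) + min_tokens

    # Hand leftover tokens (from flooring) to the highest-scoring chunks.
    remainder = total_budget - alloc.sum()
    for i in np.argsort(-score)[:remainder]:
        alloc[i] += 1
    return alloc
```

With a fixed budget of 20 soft tokens over four chunks whose local and global scores both increase, the allocator gives the later, higher-scoring chunks a larger share while every chunk keeps at least `min_tokens`, and the allocations always sum exactly to the budget.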
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | TextbookQA (MRQA out-of-domain evaluation) | EM | 22.75 | 37 |
| Question Answering | RelExt (MRQA out-of-domain evaluation) | EM | 35.92 | 37 |
| Question Answering | MRQA 2019 (dev) | -- | -- | 32 |
| Question Answering | DuoRC (MRQA out-of-domain evaluation) | EM | 14.26 | 23 |
| Question Answering | DROP (MRQA out-of-domain evaluation) | EM | 22.42 | 23 |
| Question Answering | MRQA average across 6 domains | EM | 20.95 | 23 |
| Question Answering | RACE (MRQA out-of-domain evaluation) | EM | 3.26 | 23 |
| Question Answering | BioASQ (MRQA out-of-domain evaluation) | F1 Score | 36.57 | 8 |