
Revisiting Multimodal KV Cache Compression: A Frequency-Domain-Guided Outlier-KV-Aware Approach

About

Multimodal large language models suffer from substantial inference overhead because the multimodal KV Cache grows proportionally with the visual input length. Existing multimodal KV Cache compression methods mostly rely on attention scores to reduce cache size, which makes them incompatible with established efficient attention kernels (e.g., FlashAttention) and ignores the contribution of value vectors to the attention output. In this work, we revisit multimodal KV Cache compression from the perspective of the KV matrices' distribution. First, we observe that the frequency-domain energy of multimodal KV matrices is predominantly concentrated in the low-frequency band, and we extract this principal energy with a low-pass filter. Further, we find that removing KV pairs that deviate substantially from this principal energy causes a pronounced performance drop; we define these pairs as Outlier KVs. Since Outlier KVs are more likely to encode features critical for inference, we propose FlashCache, a frequency-domain-guided, Outlier-KV-aware KV Cache compression framework. First, we introduce an Outlier KV Recognition Module that models the principal component of the multimodal KV matrices in the frequency domain and preferentially retains KV pairs that deviate significantly from it. Furthermore, a Dynamic Budget Allocation Module adaptively determines the per-layer KV Cache size so that more Outlier KVs can be retained. Experiments on multiple MLLMs and benchmarks demonstrate that FlashCache outperforms state-of-the-art multimodal KV compression methods, achieving up to 1.69x faster decoding with 80% lower KV memory usage while maintaining task performance.
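The outlier-selection idea described above can be illustrated with a minimal sketch: low-pass-filter the KV matrix along the token axis to recover its "principal energy", then keep the tokens whose rows deviate most from that reconstruction. The function name, the `keep_ratio`/`cutoff_ratio` parameters, and the per-head matrix layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_outlier_kv(kv, keep_ratio=0.2, cutoff_ratio=0.1):
    """Hedged sketch of frequency-domain outlier-KV selection.

    kv: (num_tokens, head_dim) key or value matrix for one head/layer
        (hypothetical layout; parameter values are illustrative).
    Returns the sorted indices of the KV pairs to retain.
    """
    n = kv.shape[0]
    # FFT along the token axis; most energy sits in the low-frequency bins.
    spec = np.fft.fft(kv, axis=0)
    cutoff = max(1, int(n * cutoff_ratio))
    mask = np.zeros(n, dtype=bool)
    mask[:cutoff] = True
    mask[-cutoff:] = True  # keep conjugate-symmetric low-frequency bins
    # Low-pass reconstruction = "principal energy" of the KV matrix.
    principal = np.fft.ifft(np.where(mask[:, None], spec, 0), axis=0).real
    # Tokens far from the reconstruction are the Outlier KVs.
    deviation = np.linalg.norm(kv - principal, axis=1)
    budget = max(1, int(n * keep_ratio))
    keep_idx = np.argsort(deviation)[-budget:]
    return np.sort(keep_idx)
```

For example, a smooth token sequence with a few injected spikes yields those spike positions as the top-deviation indices, matching the intuition that Outlier KVs stand apart from the dominant low-frequency structure. A full system would additionally allocate `keep_ratio` per layer, as the Dynamic Budget Allocation Module does.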

Yaoxin Yang, Peng Ye, Xudong Tan, Chongjun Tu, Maosen Zhao, Jia Hao, Tao Chen • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | MathVista | Accuracy | 52.6 | 257
Massive Multi-discipline Multimodal Understanding | MMMU | Accuracy | 53.18 | 152
Multi-image Reasoning | MuirBench | Accuracy | 44.42 | 61
Multi-modal Long-context Benchmarking | MileBench | Task T Score | 57.23 | 39
Multi-image Understanding | MileBench (test) | Temporal Multi-Image Score (Task T) | 57.3 | 21
High-resolution Multi-modal Understanding | V* | Accuracy | 80.23 | 13
Video Understanding | FAVOR-Bench | AS | 30.38 | 13
High-resolution Multi-modal Understanding | HR-Bench | Accuracy | 72.38 | 11
