
Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression

About

Visual Autoregressive (VAR) modeling has garnered significant attention for its innovative next-scale prediction approach, which yields substantial improvements in efficiency, scalability, and zero-shot generalization. Nevertheless, the coarse-to-fine methodology inherent in VAR results in exponential growth of the KV cache during inference, causing considerable memory consumption and computational redundancy. To address these bottlenecks, we introduce ScaleKV, a novel KV cache compression framework tailored for VAR architectures. ScaleKV leverages two critical observations: varying cache demands across transformer layers and distinct attention patterns at different scales. Based on these insights, ScaleKV categorizes transformer layers into two functional groups: drafters and refiners. Drafters exhibit dispersed attention across multiple preceding scales and therefore require greater cache capacity. Conversely, refiners focus attention on the current token map to process local details, and consequently need substantially less cache capacity. ScaleKV optimizes the multi-scale inference pipeline by identifying scale-specific drafters and refiners, enabling differentiated cache management tailored to each scale. Evaluation on the state-of-the-art text-to-image VAR model family, Infinity, demonstrates that our approach reduces the required KV cache memory to 10% of its original size while preserving pixel-level fidelity.
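The drafter/refiner idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes we already have, for each layer, a statistic measuring how much attention mass falls on tokens from earlier scales (dispersed attention → drafter, concentrated attention → refiner), splits a total cache budget accordingly, and evicts low-scoring cache entries. The function names, the 0.5 threshold, the 4:1 budget weighting, and the score-based eviction rule are all illustrative assumptions.

```python
import numpy as np

def classify_layers(attn_to_prior, threshold=0.5):
    """Label each transformer layer a 'drafter' or 'refiner'.

    attn_to_prior: per-layer fraction of attention mass placed on tokens
    from earlier scales. Layers that attend broadly to prior scales are
    drafters; layers focused on the current token map are refiners.
    (Hypothetical statistic and threshold; the paper's criterion may differ.)
    """
    return ["drafter" if frac >= threshold else "refiner"
            for frac in attn_to_prior]

def allocate_budgets(roles, total_budget, drafter_weight=4.0):
    """Split a total KV-cache token budget across layers, giving drafters
    a larger share than refiners. The 4:1 weighting is illustrative."""
    weights = np.array([drafter_weight if r == "drafter" else 1.0
                        for r in roles])
    raw = weights / weights.sum() * total_budget
    return np.floor(raw).astype(int)

def compress_cache(keys, values, scores, budget):
    """Keep only the top-`budget` cached tokens ranked by an accumulated
    attention score (a common eviction heuristic, used here as a stand-in
    for ScaleKV's actual selection rule)."""
    if budget >= len(scores):
        return keys, values
    keep = np.argsort(scores)[-budget:]
    keep.sort()  # preserve original token order in the cache
    return keys[keep], values[keep]
```

In a real multi-scale pipeline, classification and budgeting would be repeated per scale, so a layer acting as a drafter at a coarse scale can be treated as a refiner later.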

Kunjun Li, Zigeng Chen, Cheng-Yen Yang, Jenq-Neng Hwang • 2025

Related benchmarks

Task                          Dataset             Metric           Result   Rank
Text-to-Image Generation      DPG-Bench (test)    Global Fidelity  81.876   43
Text-to-Image Generation      GenEval 1024x1024   Latency (s)      1.37     22
Human Preference Evaluation   HPS v2.1            Photo Score      29.37    16
Human Preference Evaluation   ImageReward         Average Score    1.035    16
