
OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models

About

Omnimodal large language models (OmniLLMs) have recently attracted increasing research attention for unified audio-video understanding; however, processing long audio-video token sequences creates a significant computational bottleneck. Existing token compression methods have yet to accommodate this emerging need to jointly compress multimodal tokens. To bridge this gap, we present OmniZip, a training-free, audio-guided audio-visual token-compression framework that optimizes multimodal token representation and accelerates inference. Specifically, OmniZip first identifies salient audio tokens, then computes an audio retention score for each time group to capture information density, thereby dynamically guiding video token pruning while preserving cues from audio anchors enhanced by cross-modal similarity. Within each time window, OmniZip compresses the video tokens using an interleaved spatio-temporal scheme. Extensive empirical results demonstrate the merits of OmniZip: it achieves 3.42× inference speedup and 1.4× memory reduction over other top-performing counterparts, while maintaining performance with no training.
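The pipeline described above — score each time window by its audio information density, allocate a video-token budget per window accordingly, and keep the video tokens most similar to the window's audio anchor — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the use of mean token norm as the retention-score proxy, the proportional budget allocation, and the dot-product anchor similarity are all assumptions, and the interleaved spatio-temporal merging step is omitted for brevity.

```python
import numpy as np

def audio_guided_prune(audio_tokens, video_tokens, keep_ratio=0.25):
    """Illustrative audio-guided video token pruning (simplified sketch).

    audio_tokens: (T, Na, d) audio token embeddings per time window
    video_tokens: (T, Nv, d) video token embeddings per time window
    Returns a list of T arrays, each (n_t, d), with sum(n_t) roughly
    keep_ratio * T * Nv tokens kept overall.
    """
    T, Nv, d = video_tokens.shape

    # 1. Audio retention score per window: mean token norm as an assumed
    #    stand-in for "information density".
    retention = np.linalg.norm(audio_tokens, axis=-1).mean(axis=-1)  # (T,)
    weights = retention / retention.sum()

    # 2. Dynamically split the global video-token budget across windows
    #    in proportion to the audio retention score.
    total_budget = int(keep_ratio * T * Nv)
    budgets = np.maximum(1, np.round(weights * total_budget).astype(int))

    kept = []
    for t in range(T):
        # 3. Rank this window's video tokens by cross-modal similarity
        #    to the mean audio embedding (the "audio anchor").
        anchor = audio_tokens[t].mean(axis=0)            # (d,)
        sim = video_tokens[t] @ anchor                   # (Nv,)
        idx = np.argsort(-sim)[: budgets[t]]             # top-k by similarity
        kept.append(video_tokens[t][np.sort(idx)])       # preserve token order
    return kept
```

Windows with denser audio activity thus retain more video tokens, which is the core dynamic behavior the abstract describes.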

Keda Tao, Kele Shao, Bohan Yu, Weiqiang Wang, Jian Liu, Huan Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Question Answering | VideoMME | - | - | 99 |
| Audio-visual understanding | DailyOmni | Average Score | 67.7 | 49 |
| Audio-Visual Question Answering | WorldSense | Accuracy | 48.9 | 18 |
| Audio-Visual Question Answering | OmniVideoBench | Accuracy | 0.351 | 18 |
| Audio-Visual Question Answering | video-SALMONN 2 (test) | Miss Rate | 34.1 | 18 |
