PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models

About

Visual Language Models require substantial computational resources for inference due to the additional input tokens needed to represent visual information. However, these visual tokens often contain redundant and unimportant information, resulting in an unnecessarily high number of tokens. To address this, we introduce PACT, a method that reduces inference time and memory usage by pruning irrelevant tokens and merging visually redundant ones at an early layer of the language model. Our approach uses a novel importance metric to identify unimportant tokens without relying on attention scores, making it compatible with FlashAttention. We also propose a novel clustering algorithm, called Distance Bounded Density Peak Clustering, which efficiently clusters visual tokens while constraining the distances between elements within a cluster by a predefined threshold. We demonstrate the effectiveness of PACT through extensive experiments.
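To make the clustering idea concrete, below is a minimal, illustrative sketch of density-peak-style clustering with a distance bound. It is an assumption-laden approximation, not the paper's implementation: the function name `dbdpc_sketch`, the Euclidean metric, the density definition, and the greedy assignment order are all illustrative choices. It captures the stated constraint by guaranteeing that each token lies within the threshold of its cluster center.

```python
import numpy as np

def dbdpc_sketch(tokens, threshold):
    """Illustrative sketch of distance-bounded density-peak
    clustering (NOT the paper's exact algorithm).

    tokens: (n, d) array of visual token features.
    threshold: maximum allowed distance between a token and
        its cluster center.
    Returns the list of center indices and a per-token
    assignment array (each entry is a center index).
    """
    n = len(tokens)
    # Pairwise Euclidean distances between all tokens.
    dist = np.linalg.norm(tokens[:, None] - tokens[None, :], axis=-1)
    # Local density proxy: number of neighbors within the threshold.
    density = (dist < threshold).sum(axis=1)
    # Visit tokens from densest to sparsest. A token joins the
    # first existing center within the bound; otherwise it
    # becomes a new center itself.
    order = np.argsort(-density)
    centers, assign = [], np.full(n, -1)
    for i in order:
        for c in centers:
            if dist[i, c] <= threshold:
                assign[i] = c
                break
        else:
            centers.append(i)
            assign[i] = i
    return centers, assign
```

After clustering, visually redundant tokens in the same cluster could be merged (e.g. averaged) to shrink the sequence; note this sketch bounds token-to-center distances by the threshold, so distances between two members of a cluster are bounded by twice that value.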

Mohamed Dhouib, Davide Buscaldi, Sonia Vanier, Aymen Shabou • 2025

Related benchmarks

Task                                  | Dataset      | Result         | Rank
Visual Question Answering             | TextVQA      | Accuracy 78.56 | 1117
Object Hallucination Evaluation       | POPE         | --             | 935
Video Understanding                   | MVBench      | Accuracy 75.3  | 247
Visual Question Answering             | ChartQA      | Accuracy 76.36 | 239
Multimodal Understanding              | MMStar       | Accuracy 54.8  | 197
Diagram Question Answering            | AI2D         | Accuracy 78.4  | 196
Video Understanding                   | VideoMME     | --             | 192
Real-world Visual Question Answering  | RealworldQA  | Accuracy 58.95 | 91
Document Visual Question Answering    | DocVQA (val) | Accuracy 74    | 66
Video Understanding                   | MLVU         | --             | 54

(Showing 10 of 19 rows.)
