
HiRED: Attention-Guided Token Dropping for Efficient Inference of High-Resolution Vision-Language Models

About

High-resolution Vision-Language Models (VLMs) are widely used in multimodal tasks to enhance accuracy by preserving detailed image information. However, these models often generate an excessive number of visual tokens because they must encode multiple partitions of a high-resolution image input. Processing such a large number of visual tokens through multiple transformer networks poses significant computational challenges, particularly on resource-constrained commodity GPUs. To address this challenge, we propose High-Resolution Early Dropping (HiRED), a plug-and-play token-dropping method designed to operate within a fixed token budget. HiRED leverages the attention of the CLS token in the vision transformer (ViT) to assess the visual content of the image partitions and allocate an appropriate token budget to each partition accordingly. The most informative visual tokens from each partition, within its allocated budget, are then selected and passed to the subsequent Large Language Model (LLM). We show that HiRED achieves superior accuracy and performance compared to existing token-dropping methods. Empirically, HiRED-20% (i.e., a 20% token budget) on LLaVA-Next-7B achieves a 4.7x increase in token generation throughput, reduces response latency by 78%, and saves 14% of GPU memory for single inference on an NVIDIA TESLA P40 (24 GB). For larger batch sizes (e.g., 4), HiRED-20% prevents out-of-memory errors by cutting memory usage by 30%, while preserving the throughput and latency benefits. Code: https://github.com/hasanar1f/HiRED
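The two-step procedure described above (per-partition budget allocation from CLS attention, then top-k token selection within each partition) can be sketched as follows. This is a minimal NumPy illustration, assuming the per-partition CLS attention scores have already been extracted from the ViT; the function name and signature are illustrative and not the repository's actual API.

```python
import numpy as np

def hired_select(cls_attn_per_partition, total_budget):
    """Illustrative sketch of attention-guided token dropping.

    cls_attn_per_partition: list of 1-D arrays; entry i holds the CLS-token
        attention scores over the visual tokens of image partition i.
    total_budget: total number of visual tokens to keep across all partitions.
    Returns a list of sorted index arrays: the kept token indices per partition.
    """
    # Step 1: score each partition by its total CLS attention mass.
    mass = np.array([a.sum() for a in cls_attn_per_partition])
    shares = total_budget * mass / mass.sum()

    # Step 2: allocate the fixed budget proportionally to attention mass,
    # handing leftover tokens (from flooring) to the largest remainders.
    budgets = np.floor(shares).astype(int)
    leftover = total_budget - budgets.sum()
    for idx in np.argsort(-(shares - budgets))[:leftover]:
        budgets[idx] += 1

    # Step 3: within each partition, keep the top-k tokens by attention score.
    kept = []
    for attn, k in zip(cls_attn_per_partition, budgets):
        k = min(k, len(attn))                 # never keep more than exist
        kept.append(np.sort(np.argsort(-attn)[:k]))
    return kept
```

The selected indices would then gather the corresponding visual tokens before they are handed to the LLM, so the downstream transformer only ever processes `total_budget` tokens regardless of how many partitions the high-resolution input produced.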

Kazi Hasan Ibn Arif, JinYi Yoon, Dimitrios S. Nikolopoulos, Hans Vandierendonck, Deepu John, Bo Ji • 2024

Related benchmarks

Task                                    Dataset      Result            Rank
Visual Question Answering               VQA v2       Accuracy: 69.7    1165
Visual Question Answering               TextVQA      Accuracy: 65.2    1117
Visual Question Answering               GQA          --                963
Object Hallucination Evaluation         POPE         Accuracy: 87.7    935
Multimodal Evaluation                   MME          --                557
Text-based Visual Question Answering    TextVQA      Accuracy: 44.2    496
Visual Question Answering               GQA          Accuracy: 59.42   374
Multimodal Understanding                MMBench      Accuracy: 62.8    367
Science Question Answering              ScienceQA    Accuracy: 68.4    229
Multimodal Understanding                MMBench CN   Accuracy: 51.3    162

(Showing 10 of 18 rows)
