
Less is More: A Simple yet Effective Token Reduction Method for Efficient Multi-modal LLMs

About

The rapid advancement of Multimodal Large Language Models (MLLMs) has led to remarkable performance across various domains. However, this progress is accompanied by a substantial surge in the resource consumption of these models. We address this pressing issue by introducing a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their performance. Inspired by human attention patterns in Visual Question Answering (VQA) tasks, TRIM presents a fresh perspective on the selection and reduction of image tokens. The TRIM method has been extensively tested across 12 datasets, and the results demonstrate a significant reduction in computational overhead while maintaining a consistent level of performance. This research marks a critical stride in efficient MLLM development, promoting greater accessibility and sustainability of high-performing models.
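The abstract does not spell out the exact selection rule, but the core idea of CLIP-metric token reduction can be sketched as follows: score each image token by its similarity to the text query embedding and keep only the most relevant ones. The function name, ratio, and use of raw cosine similarity below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trim_tokens(image_tokens, text_embedding, keep_ratio=0.25):
    """Keep the image tokens most similar to a text query embedding.

    This is an illustrative sketch of similarity-based token pruning,
    not the authors' exact TRIM procedure.

    image_tokens:   (N, D) array of image patch embeddings.
    text_embedding: (D,) array, e.g. a CLIP text embedding.
    Returns the selected (k, D) tokens and their original indices.
    """
    # Cosine similarity between each image token and the text query.
    img = image_tokens / np.linalg.norm(image_tokens, axis=1, keepdims=True)
    txt = text_embedding / np.linalg.norm(text_embedding)
    sims = img @ txt
    # Keep the top-k tokens, preserving their original spatial order.
    k = max(1, int(len(image_tokens) * keep_ratio))
    idx = np.sort(np.argsort(sims)[-k:])
    return image_tokens[idx], idx

# Toy example: 8 random "image tokens" and one "text" vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
query = rng.normal(size=4)
kept, idx = trim_tokens(tokens, query, keep_ratio=0.5)
print(kept.shape, idx)
```

Because the pruned sequence is shorter, every subsequent attention layer in the language model processes fewer tokens, which is where the computational savings reported above come from.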

Dingjie Song, Wenjun Wang, Shunian Chen, Xidong Wang, Michael Guan, Benyou Wang · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 76.4 | 1165 |
| Visual Question Answering | GQA | Accuracy | 61.4 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 85.3 | 935 |
| Multimodal Evaluation | MME | -- | -- | 557 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 53.7 | 496 |
| Visual Question Answering | GQA | Accuracy | 58.4 | 374 |
| Multimodal Understanding | MMBench | -- | -- | 367 |
| Multimodal Capability Evaluation | MM-Vet | Score | 28 | 282 |
| Science Question Answering | ScienceQA | Accuracy | 48.1 | 229 |
| Multimodal Model Evaluation | MMBench | Accuracy | 67.4 | 180 |

(10 of 20 rows shown)
