When LLaVA Meets Objects: Token Composition for Vision-Language Models

About

Current autoregressive Vision Language Models (VLMs) usually rely on a large number of visual tokens to represent images, resulting in high compute requirements, especially at inference time. To address this problem, we propose Mask-LLaVA, a framework that leverages different levels of visual features to create a compact yet information-rich visual representation for autoregressive VLMs. Namely, we combine mask-based object representations with global tokens and local patch tokens. While all tokens are used during training, we show that the resulting model can flexibly drop tokens at test time, in particular the mask-based object tokens, allowing the number of tokens to be adapted during inference without retraining the model and without a significant drop in performance. We evaluate the proposed approach on a suite of standard benchmarks, showing results competitive with current token-efficient methods and comparable to the original LLaVA baseline while using only a fraction of the visual tokens. Our analysis demonstrates that combining multi-level features enables efficient learning with fewer tokens while allowing dynamic token selection at test time with good performance.

Soumya Jahagirdar, Walid Bousselham, Anna Kukleva, Hilde Kuehne • 2026
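
To make the token-composition idea concrete, here is a minimal PyTorch-style sketch of combining global, patch, and mask-based object tokens, with object tokens optionally dropped at inference. All module names, dimensions, and the `keep_objects` parameter are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: multi-level visual token composition with test-time token dropping.
# Hypothetical design based on the abstract; not the official Mask-LLaVA code.
import torch
import torch.nn as nn

class MultiLevelVisualTokens(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        # One projector per token type into the LLM embedding space
        # (an assumption; the paper may share or structure projectors differently).
        self.proj_global = nn.Linear(vis_dim, llm_dim)
        self.proj_patch = nn.Linear(vis_dim, llm_dim)
        self.proj_object = nn.Linear(vis_dim, llm_dim)

    def forward(self, global_feat, patch_feats, object_feats, keep_objects=None):
        """
        global_feat:  (B, 1, D) image-level feature
        patch_feats:  (B, P, D) local patch features
        object_feats: (B, O, D) mask-pooled object features
        keep_objects: optional int; at test time, keep only this many
                      object tokens without retraining.
        """
        if keep_objects is not None:
            # Drop object tokens at inference; truncation is a placeholder,
            # a real system could rank tokens by some importance score.
            object_feats = object_feats[:, :keep_objects]
        return torch.cat(
            [
                self.proj_global(global_feat),
                self.proj_patch(patch_feats),
                self.proj_object(object_feats),
            ],
            dim=1,
        )  # (B, 1 + P + O', llm_dim), fed to the autoregressive LLM

# Usage: train with all tokens, then shrink the object-token budget at test time.
model = MultiLevelVisualTokens()
g = torch.randn(2, 1, 1024)
p = torch.randn(2, 16, 1024)
o = torch.randn(2, 32, 1024)
train_tokens = model(g, p, o)                 # all 49 visual tokens
test_tokens = model(g, p, o, keep_objects=4)  # only 21 visual tokens
```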

Related benchmarks

| Task                             | Dataset           | Metric           | Result | Rank |
|----------------------------------|-------------------|------------------|--------|------|
| Visual Question Answering        | VizWiz            | Accuracy         | 51.8   | 1043 |
| Object Hallucination Evaluation  | POPE              | Accuracy         | 85.8   | 935  |
| Visual Question Answering        | VQA v2 (test-dev) | Overall Accuracy | 74.8   | 664  |
| Multimodal Evaluation            | MME               | Score            | 1440   | 557  |
| Visual Question Answering        | GQA               | Accuracy         | 60.2   | 374  |
| Multimodal Capability Evaluation | MM-Vet            | Score            | 31.1   | 282  |
| Science Question Answering       | ScienceQA IMG     | Accuracy         | 68.8   | 256  |
| Science Question Answering       | ScienceQA         | Accuracy         | 70.8   | 229  |
| Multimodal Model Evaluation      | MMBench           | Accuracy         | 64.9   | 180  |
| Multimodal Evaluation            | MM-Vet            | --               | --     | 122  |

Showing 10 of 14 rows.
