
When LLaVA Meets Objects: Token Composition for Vision-Language-Models

About

Current autoregressive Vision Language Models (VLMs) typically rely on a large number of visual tokens to represent images, which drives up compute requirements, especially at inference time. To address this problem, we propose Mask-LLaVA, a framework that leverages different levels of visual features to create a compact yet information-rich visual representation for autoregressive VLMs. Specifically, we combine mask-based object representations with global tokens and local patch tokens. While all tokens are used during training, we show that the resulting model can flexibly drop tokens, in particular the mask-based object tokens, at test time, allowing the token count to be adapted during inference without retraining the model and without a significant drop in performance. We evaluate the proposed approach on a suite of standard benchmarks, showing results competitive with current token-efficient methods and comparable to the original LLaVA baseline while using only a fraction of the visual tokens. Our analysis demonstrates that combining multi-level features enables efficient learning with fewer tokens while allowing dynamic token selection at test time with good performance.
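The core idea above — concatenating global, patch, and mask-based object tokens during training, then truncating the object tokens at inference to meet a token budget — can be sketched as follows. This is an illustrative sketch only: the function and argument names (`compose_visual_tokens`, `num_object_tokens`) and the toy dimensions are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def compose_visual_tokens(global_tokens, patch_tokens, object_tokens,
                          num_object_tokens=None):
    """Concatenate multi-level visual features into one token sequence.

    global_tokens:  (G, D) image-level features
    patch_tokens:   (P, D) local patch features
    object_tokens:  (O, D) mask-based object features
    num_object_tokens: if given, keep only the first k object tokens
        (test-time token budget); None keeps all of them (as in training).
    """
    if num_object_tokens is not None:
        object_tokens = object_tokens[:num_object_tokens]
    return np.concatenate([global_tokens, patch_tokens, object_tokens], axis=0)

# Toy example: 1 global token, 4 patch tokens, 8 object tokens, feature dim 16.
rng = np.random.default_rng(0)
g = rng.normal(size=(1, 16))
p = rng.normal(size=(4, 16))
o = rng.normal(size=(8, 16))

train_seq = compose_visual_tokens(g, p, o)                       # all 13 tokens
test_seq = compose_visual_tokens(g, p, o, num_object_tokens=2)   # only 7 tokens
print(train_seq.shape, test_seq.shape)  # (13, 16) (7, 16)
```

Because the object tokens sit in a contiguous slice of the sequence, dropping them at test time requires no retraining — the same model simply sees a shorter visual prefix.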

Soumya Jahagirdar, Walid Bousselham, Anna Kukleva, Hilde Kuehne • 2026

Related benchmarks

Task                              Dataset             Metric            Result  Rank
Visual Question Answering         VizWiz              Accuracy          51.8    1525
Object Hallucination Evaluation   POPE                Accuracy          85.8    1455
Visual Question Answering         VQA v2 (test-dev)   Overall Accuracy  74.8    706
Multimodal Evaluation             MME                 Score             1440    658
Visual Question Answering         GQA                 Accuracy          60.2    505
Science Question Answering        ScienceQA           Accuracy          70.8    502
Multimodal Capability Evaluation  MM-Vet              Score             31.1    345
Science Question Answering        ScienceQA IMG       Accuracy          68.8    294
Multimodal Model Evaluation       MMBench             Accuracy          64.9    180
Multimodal Evaluation             MM-Vet              Score             25.7    180

Showing 10 of 14 rows.
