
Matryoshka Multimodal Models

About

Large Multimodal Models (LMMs) such as LLaVA have shown strong performance in visual-linguistic reasoning. These models first embed images into a fixed, large number of visual tokens and then feed them into a Large Language Model (LLM). However, this design produces an excessive number of tokens for dense visual scenarios such as high-resolution images and videos, leading to substantial inefficiency. While token pruning/merging methods do exist, they produce a single-length output for each image and do not afford flexibility in trading off information density vs. efficiency. Inspired by the concept of Matryoshka Dolls, we propose M3: Matryoshka Multimodal Models, which learns to represent visual content as nested sets of visual tokens that capture information across multiple coarse-to-fine granularities. Our approach offers several unique benefits for LMMs: (1) One can explicitly control the visual granularity per test instance during inference, e.g., adjusting the number of tokens used to represent an image based on the anticipated complexity or simplicity of the content; (2) M3 provides a framework for analyzing the granularity needed for existing datasets, where we find that COCO-style benchmarks only need around 9 visual tokens to obtain accuracy similar to that of using all 576 tokens; (3) Our approach provides a foundation to explore the best trade-off between performance and visual token length at the sample level, where our investigation reveals that a large gap exists between the oracle upper bound and current fixed-scale representations.
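The nested token sets described above can be illustrated with a small sketch. The abstract mentions granularities of 9 and 576 tokens; a natural way to realize a Matryoshka-style hierarchy over a 24×24 grid of patch embeddings is 2D average pooling at successively coarser scales (576, 144, 36, 9, 1 tokens). The function name and the pooling-based construction here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def nested_visual_tokens(tokens, grid=24, scales=(24, 12, 6, 3, 1)):
    """Build coarse-to-fine nested token sets by 2D average pooling.

    tokens: (grid*grid, d) array of visual tokens, e.g. 576 patch
    embeddings on a 24x24 grid. Each scale s yields s*s tokens, giving
    a Matryoshka-style hierarchy: 576, 144, 36, 9, 1.
    Hypothetical sketch; not the authors' actual code.
    """
    d = tokens.shape[1]
    grid_tokens = tokens.reshape(grid, grid, d)
    out = {}
    for s in scales:
        k = grid // s  # pooling window size at this scale
        # Split the grid into s x s blocks of k x k patches, average each block.
        pooled = grid_tokens.reshape(s, k, s, k, d).mean(axis=(1, 3))
        out[s * s] = pooled.reshape(s * s, d)
    return out

tokens = np.random.rand(576, 64)
sets = nested_visual_tokens(tokens)
print(sorted(sets))  # token-set sizes: [1, 9, 36, 144, 576]
```

At inference, one would pick a single granularity per image (e.g. the 9-token set for simple COCO-style images, the full 576 for dense text or charts), which is the per-instance control the abstract describes.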

Mu Cai, Jianwei Yang, Jianfeng Gao, Yong Jae Lee• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy | 53.5 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy | 88.6 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 79.2 | 1362 |
| Visual Question Answering | TextVQA | Accuracy | 60.4 | 1285 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 58.7 | 807 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 76.9 | 706 |
| Multimodal Evaluation | MME | Score | 1.48e+3 | 658 |
| Visual Question Answering | GQA | Accuracy | 59.7 | 505 |
| Visual Question Answering | ChartQA | Accuracy | 64.7 | 371 |
| OCR Evaluation | OCRBench | Score | 58 | 329 |

Showing 10 of 34 benchmark results.
