
From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information

About

Despite the impressive capabilities of Multimodal Large Language Models (MLLMs) in integrating text and image modalities, challenges remain in accurately interpreting detailed visual elements. Vision detection models excel at recognizing fine-grained image details, prompting researchers to use them to enhance MLLMs. One effective strategy is to infuse detection information in text format, which has proven simple and effective. However, most studies utilize this method without training, leaving the potential of adaptive training largely unexplored. Adaptive training could significantly enhance MLLMs' comprehension of unique inputs while filtering out irrelevant information. This paper addresses the crucial question: How does training impact MLLMs' understanding of infused textual detection information? We systematically experiment with various representative models to evaluate the effects of training-free, retraining, and fine-tuning strategies. We also examine the influence of training on MLLMs' original abilities and the interchangeability of detection models. Our findings indicate that fine-tuning a pre-trained MLLM to incorporate textual detection information delivers superior results compared to training-free and retraining methods, improving performance by 6.71% across 10 widely recognized benchmarks. Furthermore, fine-tuning enables MLLMs to retain performance enhancements even when detection models are swapped, indicating improved understanding of formatted textual data. We release our code to support further exploration of fusion strategies for vision detection models and the enhancement of MLLMs' fine-grained multimodal capabilities.
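The core strategy the abstract describes, infusing detection information into an MLLM in text format, can be sketched as follows. This is a minimal illustration under assumed conventions: the detection output fields (label, confidence, bounding box) and the prompt template are hypothetical and not the paper's exact format.

```python
# Sketch: render vision-detector output as plain text and prepend it
# to the user's question before sending the prompt to an MLLM.
# Field names and the prompt layout are illustrative assumptions.

def format_detections(detections):
    """Render detector output (label, confidence, bbox) as one line per object."""
    lines = []
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        lines.append(
            f"{det['label']} (conf {det['confidence']:.2f}) at [{x1}, {y1}, {x2}, {y2}]"
        )
    return "\n".join(lines)

def build_prompt(question, detections):
    """Prepend the formatted detection text to the question."""
    return (
        "Detected objects:\n"
        + format_detections(detections)
        + "\n\nQuestion: " + question
    )

# Example detector output for a single image (made-up values).
detections = [
    {"label": "dog", "confidence": 0.97, "bbox": (12, 40, 210, 300)},
    {"label": "frisbee", "confidence": 0.88, "bbox": (220, 35, 300, 110)},
]
prompt = build_prompt("What is the dog playing with?", detections)
print(prompt)
```

In the training-free setting such a prompt is fed to the MLLM as-is, while the adaptive setting the paper studies fine-tunes the model on prompts of this shape so it learns to exploit (or ignore) the injected detection text.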

Qirui Jiao, Daoyuan Chen, Yilun Huang, Yaliang Li, Ying Shen • 2024

Related benchmarks

Task                                 | Dataset           | Metric           | Result  | Rank
Object Hallucination Evaluation      | POPE              | Accuracy         | 88.8    | 935
Multimodal Reasoning                 | MM-Vet            | MM-Vet Score     | 38.9    | 281
Science Question Answering           | ScienceQA SQA-IMG | Accuracy         | 80.7    | 114
Visual Instruction Following         | LLaVA-W           | Score            | 69.5    | 28
Text-based Visual Question Answering | TextVQA           | Accuracy         | 60.1    | 23
Science Question Answering           | ScienceQA IMG     | Accuracy         | 60.1    | 21
Multimodal Benchmarking              | MM-Bench          | Accuracy         | 67.3    | 19
Object Hallucination Evaluation      | POPE              | Accuracy         | 88.9    | 18
Multimodal Perception Evaluation     | MME Perception    | Perception Score | 1.48e+3 | 18
Visual Question Answering            | GQA               | Accuracy         | 60.5    | 17

(Showing 10 of 13 rows.)

Other info

Code
