
ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models

About

In this work, we propose a training-free method to inject visual prompts into Multimodal Large Language Models (MLLMs) through test-time optimization of a learnable latent variable. We observe that attention, as the core module of MLLMs, connects text prompt tokens and visual tokens and ultimately determines the final output. Our approach adjusts the visual tokens from the MLP output at test time, controlling the attention response so that text prompt tokens attend to visual tokens in the referring regions. Specifically, we optimize a learnable latent variable with respect to an energy function that strengthens the attention on the referring regions. This enables detailed region description and reasoning without substantial training costs or model retraining. Our method offers a promising direction for integrating referring abilities into MLLMs and supports referring with box, mask, scribble, and point inputs. The results demonstrate that our method exhibits out-of-domain generalization and interpretability.
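To make the mechanism above concrete, below is a minimal, self-contained PyTorch sketch of the test-time optimization loop: a learnable latent is added to frozen visual tokens and optimized against an energy function that pulls text-to-visual attention toward a referred region. The toy attention_map, energy, and region_mask here are illustrative assumptions standing in for a single attention layer, not the authors' released implementation.

```
# Minimal sketch of training-free visual prompting via test-time optimization.
# All names (attention_map, energy, region_mask) are illustrative assumptions.
import torch

def attention_map(text_tokens, visual_tokens):
    # Toy stand-in for one MLLM attention layer: softmax(Q K^T / sqrt(d)).
    d = text_tokens.shape[-1]
    scores = text_tokens @ visual_tokens.transpose(-1, -2) / d ** 0.5
    return scores.softmax(dim=-1)  # shape: (num_text, num_visual)

def energy(attn, region_mask):
    # Illustrative energy: maximize the attention mass that text prompt
    # tokens place on visual tokens inside the referred region.
    inside = (attn * region_mask).sum(dim=-1)
    return (1.0 - inside).mean()

torch.manual_seed(0)
num_text, num_visual, dim = 4, 16, 32
text_tokens = torch.randn(num_text, dim)      # frozen text prompt tokens
visual_tokens = torch.randn(num_visual, dim)  # frozen visual (MLP-output) tokens
region_mask = torch.zeros(num_visual)
region_mask[3:7] = 1.0  # hypothetical referred region (e.g. from a box or mask)

# Learnable latent added to the visual tokens; only this variable is optimized,
# so the MLLM itself stays frozen (no retraining).
latent = torch.zeros_like(visual_tokens, requires_grad=True)
opt = torch.optim.Adam([latent], lr=0.1)

for step in range(50):  # test-time optimization loop
    attn = attention_map(text_tokens, visual_tokens + latent)
    loss = energy(attn, region_mask)
    opt.zero_grad()
    loss.backward()
    opt.step()

final_attn = attention_map(text_tokens, visual_tokens + latent)
print("attention mass on region:", (final_attn * region_mask).sum(-1).mean().item())
```

In this sketch the attention mass on the masked region rises as the latent is optimized, which mirrors the paper's goal of steering text tokens toward the referred visual tokens without touching any model weights.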

Mingrui Wu, Xinyue Cai, Jiayi Ji, Jiale Li, Oucheng Huang, Gen Luo, Hao Fei, Guannan Jiang, Xiaoshuai Sun, Rongrong Ji • 2024

Related benchmarks

Task                             Dataset             Metric        Result   Rank
Referring Object Classification  LVIS (test)         Accuracy      60.79    22
Referring Text Classification    COCO-text (test)    Accuracy      61.22    15
Vision-Language Reasoning        CODA-LM 1.0 (test)  Barrier       39.3     13
Multimodal Reasoning             GeoBench-VLM        Aerial Score  18.1     11
Referring Description            RefCOCOg            Recall @      45.53    6

Other info

Code
