Mimic In-Context Learning for Multimodal Tasks

About

Recently, In-Context Learning (ICL) has become a significant inference paradigm in Large Multimodal Models (LMMs), using a few in-context demonstrations (ICDs) to prompt LMMs for new tasks. However, the synergistic effects in multimodal data make ICL performance highly sensitive to the configuration of the ICDs, motivating the need for a more stable and general mapping function. Mathematically, in Transformer-based models, ICDs act as "shift vectors" added to the hidden states of query tokens. Inspired by this, we introduce Mimic In-Context Learning (MimIC) to learn stable and generalizable shift effects from ICDs. Specifically, compared with previous shift vector-based methods, MimIC approximates the shift effects more strictly by integrating lightweight learnable modules into LMMs, with four key enhancements: 1) inserting shift vectors after attention layers, 2) assigning a shift vector to each attention head, 3) making the shift magnitude query-dependent, and 4) employing a layer-wise alignment loss. Extensive experiments on two LMMs (Idefics-9b and Idefics2-8b-base) across three multimodal tasks (VQAv2, OK-VQA, Captioning) demonstrate that MimIC outperforms existing shift vector-based methods. The code is available at https://github.com/Kamichanw/MimIC.
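The four enhancements are easiest to see in code. Below is a minimal PyTorch-style sketch of a per-head, query-gated shift applied after an attention layer, plus a layer-wise alignment loss; the names (MimicShift, layer_alignment_loss) and the sigmoid gate are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MimicShift(nn.Module):
    """Hypothetical sketch of a MimIC-style learnable shift module."""

    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        # Enhancement 2: one learnable shift vector per attention head.
        self.shift = nn.Parameter(torch.zeros(num_heads, head_dim))
        # Enhancement 3: query-dependent magnitude via a small gate
        # (assumed here to be a linear layer + sigmoid).
        self.gate = nn.Linear(head_dim, 1)

    def forward(self, attn_out: torch.Tensor) -> torch.Tensor:
        # attn_out: (batch, seq, num_heads, head_dim), taken from the
        # output of the attention layer (enhancement 1: shift is added
        # after attention, not to the raw hidden states).
        alpha = torch.sigmoid(self.gate(attn_out))   # (B, S, H, 1)
        return attn_out + alpha * self.shift         # broadcast over heads

def layer_alignment_loss(shifted: torch.Tensor, icl: torch.Tensor) -> torch.Tensor:
    # Enhancement 4: align each layer's shifted hidden states with the
    # hidden states from a real few-shot (ICD-prompted) forward pass,
    # so the learned shift mimics genuine in-context learning.
    return F.mse_loss(shifted, icl)
```

In a training loop, `layer_alignment_loss` would be summed over layers, comparing the zero-shot run with `MimicShift` modules enabled against a teacher run prompted with real ICDs.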

Yuchu Jiang, Jiale Fu, Chenduo Hao, Xinting Hu, Yingzhe Peng, Xin Geng, Xu Yang • 2025

Related benchmarks

Task                      | Dataset                      | Metric       | Result | Rank
Visual Question Answering | OK-VQA                       | Accuracy     | 69.43  | 260
Image Captioning          | COCO                         | CIDEr        | 114.9  | 130
Visual Question Answering | VQAv2 (val, 10,000 samples)  | VQA Accuracy | 74.4   | 12
Visual Question Answering | OK-VQA full v1.0 (val)       | VQA Accuracy | 58.74  | 12
Hallucination Analysis    | COCO Captioning (val)        | CHAIRs       | 4      | 6
