
HiFICL: High-Fidelity In-Context Learning for Multimodal Tasks

About

In-Context Learning (ICL) is a key paradigm for Large Multimodal Models (LMMs), adapting them to new tasks with a few in-context demonstrations (ICDs). However, ICL performance is sensitive to the demonstration configuration, and processing long demonstration contexts is computationally expensive. Mathematically, the influence of the demonstrations on the attention output can be decomposed into a dynamic mixture of the standard attention output and the context values. Current approximation methods simplify this mechanism by learning a single "shift vector". Inspired by the exact decomposition, we introduce High-Fidelity In-Context Learning (HiFICL) to model the ICL mechanism more faithfully. HiFICL consists of three key components: 1) a set of "virtual key-value pairs" that act as a learnable context, 2) a low-rank factorization for stable, regularized training, and 3) a simple end-to-end training objective. From another perspective, this mechanism constitutes a form of context-aware Parameter-Efficient Fine-Tuning (PEFT). Extensive experiments show that HiFICL consistently outperforms existing approximation methods on several multimodal benchmarks. The code is available at https://github.com/bbbandari/HiFICL.
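The core idea can be illustrated with a minimal sketch: instead of prepending real demonstrations to the prompt, a small set of learnable key-value pairs (factorized into low-rank matrices) is concatenated with the real keys and values before attention. This is an illustrative numpy approximation, not the paper's implementation; all parameter names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, r, n = 64, 8, 4, 10  # hidden dim, # virtual KV pairs, rank, real sequence length

# Learnable "virtual key-value pairs", parameterized with a low-rank
# factorization (hypothetical shapes; the paper's exact setup may differ)
K_a, K_b = rng.normal(size=(m, r)), rng.normal(size=(r, d))
V_a, V_b = rng.normal(size=(m, r)), rng.normal(size=(r, d))
K_virt = K_a @ K_b  # (m, d) virtual keys
V_virt = V_a @ V_b  # (m, d) virtual values

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    # Standard scaled dot-product attention for a single query vector
    w = softmax(q @ K.T / np.sqrt(d))
    return w @ V

q = rng.normal(size=(d,))
K_real = rng.normal(size=(n, d))
V_real = rng.normal(size=(n, d))

# Attention over [virtual context ; real sequence]: the virtual pairs stand
# in for in-context demonstrations without lengthening the actual prompt,
# so the output mixes the standard attention output with the learned
# context values, as in the decomposition described above.
out = attend(q, np.vstack([K_virt, K_real]), np.vstack([V_virt, V_real]))
print(out.shape)  # (64,)
```

The low-rank factors (`K_a`, `K_b`, `V_a`, `V_b`) keep the number of trainable parameters small (2·m·r + 2·r·d rather than 2·m·d), which is what enables the stable, regularized training the abstract mentions.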

Xiaoyu Li, Yuhang Liu, Xuanshuo Kang, Zheng Luo, Fangqi Lou, Xiaohua Wu, Zihan Xiong • 2026

Related benchmarks

Task                      | Dataset                      | Metric         | Result | Rank
Visual Question Answering | OK-VQA                       | Accuracy       | 73.12  | 260
Image Captioning          | COCO                         | CIDEr          | 115.2  | 130
Visual Question Answering | VQA 10,000 samples v2 (val)  | Accuracy (VQA) | 74.66  | 12
Visual Question Answering | OK-VQA full v1.0 (val)       | VQA Accuracy   | 59.56  | 12
Hallucination Analysis    | COCO Captioning (val)        | CHAIRs         | 3.2    | 6
