HiFICL: High-Fidelity In-Context Learning for Multimodal Tasks
About
In-Context Learning (ICL) is a significant paradigm for Large Multimodal Models (LMMs), which adapts to new tasks using a few in-context demonstrations (ICDs). However, ICL's performance is sensitive to the configuration of the demonstrations, and it is computationally expensive. Mathematically, the influence of these demonstrations can be decomposed into a dynamic mixture of the standard attention output and the context values. Current approximation methods simplify this process by learning a "shift vector". Inspired by the exact decomposition, we introduce High-Fidelity In-Context Learning (HiFICL) to model the ICL mechanism more faithfully. HiFICL consists of three key components: 1) a set of "virtual key-value pairs" that act as a learnable context, 2) a low-rank factorization for stable and regularized training, and 3) a simple end-to-end training objective. From another perspective, this mechanism constitutes a form of context-aware Parameter-Efficient Fine-Tuning (PEFT). Extensive experiments show that HiFICL consistently outperforms existing approximation methods on several multimodal benchmarks. The code is available at https://github.com/bbbandari/HiFICL.
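The first two components can be illustrated with a minimal sketch: learnable "virtual" key-value pairs, each parameterized as a low-rank product, are concatenated to the real context keys and values before attention. All names, shapes, and initializations below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualKVAttention(nn.Module):
    """Sketch of single-head attention augmented with learnable virtual
    key-value pairs standing in for in-context demonstrations.

    Hypothetical illustration: the virtual keys/values are low-rank
    factored (K_v = A_k @ B_k) for regularized training, as described
    in the abstract; hyperparameter names are assumptions.
    """

    def __init__(self, d_model: int, n_virtual: int, rank: int):
        super().__init__()
        # Low-rank factors for the virtual keys and values.
        self.k_left = nn.Parameter(torch.randn(n_virtual, rank) * 0.02)
        self.k_right = nn.Parameter(torch.randn(rank, d_model) * 0.02)
        self.v_left = nn.Parameter(torch.randn(n_virtual, rank) * 0.02)
        self.v_right = nn.Parameter(torch.randn(rank, d_model) * 0.02)

    def forward(self, q, k, v):
        # q: (batch, L_q, d)   k, v: (batch, L_kv, d)
        b = q.size(0)
        # Reconstruct the virtual KV pairs from their low-rank factors
        # and broadcast them across the batch.
        vk = (self.k_left @ self.k_right).expand(b, -1, -1)
        vv = (self.v_left @ self.v_right).expand(b, -1, -1)
        # Prepend the learnable context to the real keys and values.
        k_aug = torch.cat([vk, k], dim=1)
        v_aug = torch.cat([vv, v], dim=1)
        # Standard scaled dot-product attention over the augmented context.
        scores = q @ k_aug.transpose(-2, -1) / q.size(-1) ** 0.5
        return F.softmax(scores, dim=-1) @ v_aug
```

Since only the four low-rank factor matrices are trainable, the backbone stays frozen, which is what makes the mechanism read as a form of context-aware PEFT.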
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | OK-VQA | Accuracy | 73.12 | 260 |
| Image Captioning | COCO | CIDEr | 115.2 | 130 |
| Visual Question Answering | VQA v2 (val, 10,000 samples) | Accuracy (VQA) | 74.66 | 12 |
| Visual Question Answering | OK-VQA v1.0 full (val) | VQA Accuracy | 59.56 | 12 |
| Hallucination Analysis | COCO Captioning (val) | CHAIRs | 3.2 | 6 |