
Learning to Select Visual In-Context Demonstrations

About

Multimodal Large Language Models (MLLMs) adapt to visual tasks via in-context learning (ICL), which relies heavily on demonstration quality. The dominant demonstration selection strategy is unsupervised k-Nearest Neighbor (kNN) search. While simple, this similarity-first approach is sub-optimal for complex factual regression tasks; it selects redundant examples that fail to capture the task's full output range. We reframe selection as a sequential decision-making problem and introduce Learning to Select Demonstrations (LSD), training a Reinforcement Learning agent to construct optimal demonstration sets. Using a Dueling DQN with a query-centric Transformer Decoder, our agent learns a policy that maximizes MLLM downstream performance. Evaluating across five visual regression benchmarks, we uncover a crucial dichotomy: while kNN remains optimal for subjective preference tasks, LSD significantly outperforms baselines on objective, factual regression tasks. By balancing visual relevance with diversity, LSD better defines regression boundaries, illuminating when learned selection is strictly necessary for visual ICL.
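To make the contrast in the abstract concrete, here is a minimal, hypothetical sketch of the two selection regimes over toy embeddings: a similarity-first kNN picker, and a greedy picker that blends query relevance with diversity among the chosen demonstrations (an MMR-style heuristic used purely for illustration; it is not the paper's learned Dueling DQN policy, and all names and parameters below are assumptions).

```python
import numpy as np

# Illustrative heuristic only: NOT the paper's RL agent. It shows how a
# similarity-first picker returns near-duplicates, while a relevance+diversity
# trade-off spreads picks across the pool's output range.

def knn_select(query, pool, k):
    """Similarity-first baseline: the k pool embeddings closest to the query."""
    dists = np.linalg.norm(pool - query, axis=1)
    return list(np.argsort(dists)[:k])

def diverse_select(query, pool, k, lam=0.5):
    """Greedily add the candidate maximizing a blend of query relevance
    and distance to already-selected demonstrations (lam is hypothetical)."""
    sims = -np.linalg.norm(pool - query, axis=1)  # higher = more relevant
    selected, candidates = [], set(range(len(pool)))
    while len(selected) < k and candidates:
        def score(i):
            if not selected:
                return sims[i]
            div = min(np.linalg.norm(pool[i] - pool[j]) for j in selected)
            return lam * sims[i] + (1 - lam) * div
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy embeddings: three near-duplicates close to the query, two outliers.
query = np.zeros(2)
pool = np.array([[0.10, 0.0], [0.11, 0.0], [0.12, 0.0], [5.0, 0.0], [0.0, 5.0]])
print(knn_select(query, pool, 3))      # redundant neighbors: [0, 1, 2]
print(diverse_select(query, pool, 3))  # keeps the closest item but adds spread
```

On the toy pool, kNN returns the three near-duplicate neighbors, while the diversity-aware picker retains the most relevant item but trades the remaining slots for spread, mirroring the abstract's claim that diversity helps define regression boundaries.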

Eugene Lee, Yu-Chi Lin, Jiajie Diao • 2026

Related benchmarks

Task                        Dataset        Result     Rank
Image Aesthetic Assessment  AVA            --         68
Age Estimation              UTKFace        MAE 5.9    27
Image Quality Assessment    KADID-10K      --         21
Image Quality Assessment    KonIQ-10k      MAE 0.39   14
Attractiveness Rating       SCUT-FBP5500   MAE 0.55   14
