
Exploring Task-Level Optimal Prompts for Visual In-Context Learning

About

With the development of Vision Foundation Models (VFMs) in recent years, Visual In-Context Learning (VICL) has become a better choice than modifying the model in most scenarios. Unlike retraining or fine-tuning, VICL requires no changes to the model's weights or architecture; it only needs a prompt with demonstrations to teach the VFM how to solve a task. Currently, the significant computational cost of finding an optimal prompt for every test sample hinders the deployment of VICL, since determining which demonstrations to use when constructing a prompt is expensive. In this paper, however, we report a counterintuitive finding: most test samples achieve optimal performance under the same prompt, so sample-level prompt search spends extra time only to produce identical prompts. We therefore propose task-level prompting to reduce the cost of prompt search at inference time, and introduce two time-saving yet effective task-level prompt search strategies. Extensive experimental results show that the proposed method identifies near-optimal prompts and reaches the best VICL performance at a minimal cost that prior work has never achieved.
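The core idea, selecting one prompt per task instead of one per test sample, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the names `candidate_prompts`, `val_samples`, and `evaluate_prompt` are hypothetical.

```python
# Sketch of task-level prompt search for VICL (hypothetical API; not the
# authors' code). Each candidate prompt is a set of demonstrations; we score
# every candidate once on a small validation set, then reuse the single best
# prompt for all test samples, avoiding per-sample search.

def task_level_prompt_search(candidate_prompts, val_samples, evaluate_prompt):
    """Return the one prompt with the best average score on val_samples."""
    best_prompt, best_score = None, float("-inf")
    for prompt in candidate_prompts:
        # Average performance of this prompt across the validation samples.
        score = sum(evaluate_prompt(prompt, s) for s in val_samples) / len(val_samples)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt

# Toy usage: prompts and samples are integers, and the hypothetical
# evaluator rewards prompts that are close to the sample.
prompts = [1, 2, 3, 4]
val = [3, 3, 4]
best = task_level_prompt_search(prompts, val, lambda p, s: -abs(p - s))
```

The search cost is one evaluation pass per candidate prompt over a small validation set, paid once per task rather than once per test sample.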

Yan Zhu, Huan Ma, Changqing Zhang • 2025

Related benchmarks

Task                     | Dataset                   | Metric | Result | Rank
-------------------------|---------------------------|--------|--------|-----
Semantic segmentation    | PASCAL-5^i Fold-0         | mIoU   | 39.09  | 75
Semantic segmentation    | PASCAL-5^i Fold-1         | mIoU   | 44.37  | 75
Semantic segmentation    | PASCAL-5^i Fold-2         | mIoU   | 37.93  | 75
Semantic segmentation    | PASCAL-5^i Fold-3         | mIoU   | 32.4   | 75
Foreground segmentation  | Pascal-5i Fold-1 (test)   | mIoU   | 44.37  | 25
Foreground segmentation  | Pascal-5i (3)             | mIoU   | 30.84  | 25
Foreground segmentation  | Pascal-5i Fold-0 (test)   | mIoU   | 39.09  | 25
Single Object Detection  | PASCAL VOC 2012 (test)    | mIoU   | 29.03  | 24
Coloring                 | ImageNet-1K               | MSE    | 0.62   | 19
Image Colorization       | ImageNet 1k (test)        | MSE    | 0.62   | 17

(Showing 10 of 16 rows.)
