
LOVM: Language-Only Vision Model Selection

About

Pre-trained multi-modal vision-language models (VLMs) are becoming increasingly popular due to their exceptional performance on downstream vision applications, particularly in the few- and zero-shot settings. However, selecting the best-performing VLM for a given downstream application is non-trivial, as it is dataset- and task-dependent. Meanwhile, exhaustively evaluating all available VLMs on a novel application is not only time-consuming and computationally demanding but also requires collecting a labeled dataset for evaluation. As the number of open-source VLM variants grows, there is a need for an efficient model selection strategy that does not require access to a curated evaluation dataset. This paper proposes a novel task and benchmark for efficiently evaluating VLMs' zero-shot performance on downstream applications without access to the downstream task dataset. Specifically, we introduce a new task, LOVM: Language-Only Vision Model Selection, where methods are expected to perform both model selection and performance prediction based solely on a text description of the desired downstream application. We then introduce an extensive LOVM benchmark consisting of ground-truth evaluations of 35 pre-trained VLMs and 23 datasets, where methods are expected to rank the pre-trained VLMs and predict their zero-shot performance.
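To make the problem setup concrete, the sketch below outlines the shape of a LOVM-style evaluation: a method receives only a textual description of the downstream task plus a list of candidate VLMs, and must return a predicted score per model, which is then sorted into a ranking. The interface, field names, and model identifiers here are hypothetical illustrations, not the authors' actual API.

```python
# Hypothetical sketch of the LOVM problem setup (not the paper's codebase).
# A LOVM method sees only text about the downstream task and must
# (1) rank the candidate VLMs and (2) predict their zero-shot accuracy.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class TaskDescription:
    """Text-only specification of the desired downstream application."""
    name: str                # e.g. "stanford-cars" (illustrative)
    domain: str              # e.g. "natural photos of cars"
    class_names: list[str]   # the label set, given as plain text
    description: str         # free-form description of the task


class LOVMMethod(Protocol):
    def predict(
        self,
        task: TaskDescription,
        candidate_vlms: list[str],  # model identifiers, e.g. "ViT-B-32/openai"
    ) -> dict[str, float]:
        """Return a predicted zero-shot accuracy per VLM; sorting these
        scores yields the model ranking evaluated by the benchmark."""
        ...


def rank_models(scores: dict[str, float]) -> list[str]:
    """Order candidate VLMs from best to worst predicted performance."""
    return sorted(scores, key=scores.get, reverse=True)
```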

Orr Zohar, Shih-Cheng Huang, Kuan-Chieh Wang, Serena Yeung · 2023

Related benchmarks

Task                            | Dataset                             | Result            | Rank
Model Selection                 | LOVM average over 23 datasets       | R5 Score: 0.446   | 8
Model Selection                 | LOVM VLM Zoo original               | R_S Score: 0.446  | 8
Vision-Language Model Selection | 21 datasets VLM selection few-shot  | NDCG@5: 0.544     | 5
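For reference, ranking metrics of the kind reported above (top-5 recall and NDCG@5) can be computed from predicted versus ground-truth per-model scores roughly as follows. These are generic implementations for illustration and may differ in detail from the benchmark's official evaluation code.

```python
# Illustrative ranking metrics: top-k recall and NDCG@k over candidate models.
import math


def top_k_recall(pred: dict[str, float], true: dict[str, float], k: int = 5) -> float:
    """Fraction of the ground-truth top-k models recovered in the predicted top-k."""
    top_pred = sorted(pred, key=pred.get, reverse=True)[:k]
    top_true = sorted(true, key=true.get, reverse=True)[:k]
    return len(set(top_pred) & set(top_true)) / k


def ndcg_at_k(pred: dict[str, float], true: dict[str, float], k: int = 5) -> float:
    """NDCG@k, using each model's ground-truth accuracy as its relevance."""
    order = sorted(pred, key=pred.get, reverse=True)[:k]
    dcg = sum(true[m] / math.log2(i + 2) for i, m in enumerate(order))
    ideal = sorted(true.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```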
