
Conformal Cross-Modal Active Learning

About

Foundation models for vision have transformed visual recognition with powerful pretrained representations and strong zero-shot capabilities, yet their potential for data-efficient learning remains largely untapped. Active Learning (AL) aims to minimize annotation costs by strategically selecting the most informative samples for labeling, but existing methods overlook the rich multimodal knowledge embedded in modern vision-language models (VLMs). We introduce Conformal Cross-Modal Acquisition (CCMA), a novel AL framework that bridges vision and language modalities through a teacher-student architecture. CCMA employs a pretrained VLM as a teacher to provide semantically grounded uncertainty estimates, conformally calibrated to guide sample selection for a vision-only student model. By integrating multimodal conformal scoring with diversity-aware selection strategies, CCMA achieves superior data efficiency across multiple benchmarks. Our approach consistently outperforms state-of-the-art AL baselines, demonstrating clear advantages over methods relying solely on uncertainty or diversity metrics.

Huy Hoang Nguyen, Cédric Jung, Shirin Salehi, Tobias Glück, Anke Schmeink, Andreas Kugi • 2026
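
The abstract describes the acquisition loop only at a high level; the paper's exact conformal scoring and selection rules are not reproduced on this page. The sketch below is a hypothetical illustration of the general idea: split-conformal calibration of a VLM teacher's zero-shot probabilities on a small labeled set, ranking of unlabeled pool samples by conformal prediction-set size, and a diversity-aware k-means step over candidate embeddings. All function names, the clustering heuristic, and the miscoverage level `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of conformal, teacher-guided active-learning acquisition.
# Assumptions: `cal_probs`/`pool_probs` are teacher (VLM) class probabilities,
# `cal_labels` come from a small labeled calibration split, `pool_feats` are
# image embeddings of the unlabeled pool, and `alpha` is the miscoverage level.
import numpy as np
from sklearn.cluster import KMeans


def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal threshold from teacher probabilities on a calibration set."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level (standard split conformal prediction).
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level)


def acquire(pool_probs, pool_feats, threshold, budget):
    """Select `budget` samples: rank by conformal set size, then diversify with k-means."""
    # Conformal prediction set of a sample: all classes whose score is below the threshold.
    set_sizes = (1.0 - pool_probs <= threshold).sum(axis=1)
    # Keep the most ambiguous candidates (largest prediction sets), a few times the budget.
    n_cand = min(len(set_sizes), 5 * budget)
    candidates = np.argsort(-set_sizes)[:n_cand]
    # Diversity-aware step: cluster candidate embeddings and take, per cluster,
    # the sample closest to the cluster centre.
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(pool_feats[candidates])
    picked = []
    for c in range(budget):
        members = candidates[km.labels_ == c]
        if len(members) == 0:
            continue
        d = np.linalg.norm(pool_feats[members] - km.cluster_centers_[c], axis=1)
        picked.append(members[np.argmin(d)])
    return np.array(picked)
```

In an actual AL round, the returned pool indices would be sent for annotation and the vision-only student retrained on the enlarged labeled set before the next acquisition step; how the teacher's calibrated signal is combined with the student's own uncertainty is specific to the paper and not assumed here.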

Related benchmarks

Task                 | Dataset        | Metric        | Result | Rank
Image Classification | Food101        | Accuracy      | 90.8   | 457
Image Classification | CIFAR100       | Mean Accuracy | 91.6   | 55
Image Classification | DomainNet Real | Mean Accuracy | 85.5   | 55
