
AmorLIP: Efficient Language-Image Pretraining via Amortization

About

Contrastive Language-Image Pretraining (CLIP) has demonstrated strong zero-shot performance across diverse downstream text-image tasks. Existing CLIP methods typically optimize a contrastive objective using negative samples drawn from each minibatch. To achieve robust representation learning, these methods require extremely large batch sizes, escalating computational demands to hundreds or even thousands of GPUs. Prior approaches to mitigate this issue often compromise downstream performance, prolong training duration, or face scalability challenges with very large datasets. To overcome these limitations, we propose AmorLIP, an efficient CLIP pretraining framework that amortizes the expensive computations involved in contrastive learning through lightweight neural networks, substantially improving training efficiency and performance. Leveraging insights from a spectral factorization of energy-based models, we introduce novel amortization objectives along with practical techniques to improve training stability. Extensive experiments across 38 downstream tasks demonstrate the superior zero-shot classification and retrieval capabilities of AmorLIP, which consistently outperforms standard CLIP baselines with relative improvements of up to 12.24%.
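For context, the minibatch contrastive objective the abstract refers to is the symmetric InfoNCE loss used by standard CLIP, where every other pair in the batch serves as a negative. This is why large batches are needed, and it is the per-batch computation that AmorLIP amortizes. A minimal numpy sketch of that baseline loss (not AmorLIP's amortized objective; the function name and temperature value are illustrative):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over in-batch negatives, as in standard CLIP.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched
    image-text pair. All other rows act as negatives, which is why the
    objective benefits from very large batch sizes.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
    n = logits.shape[0]
    idx = np.arange(n)                      # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Evaluating this loss exactly requires the full batch-by-batch similarity matrix on every step; AmorLIP instead approximates the expensive partition-function-like quantities with lightweight networks fitted via its amortization objectives.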

Haotian Sun, Yitong Li, Yuchen Zhuang, Niao He, Hanjun Dai, Bo Dai• 2025

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | ImageNet & Variants | Accuracy 33.35 | 17
Image-Text Retrieval | MSCOCO Flickr30K Retrieval (test) | Retrieval Score 55.23 | 10
Image-Text Retrieval | Retrieval Benchmarks | Average Score 24.58 | 10
Zero-shot Classification | DataComp (test) | DataComp Average 56.24 | 10
Zero-shot Classification | ImageNet & Variants (test) | Zero-shot Accuracy 57.12 | 10
Classification | DataComp CC3M track | Average Performance 22.89 | 5
Classification | DataComp CC12M track | Average Performance 29.86 | 5
Image Classification | DataComp | Average Score 22.89 | 5
Image-Text Retrieval | Retrieval Datasets | Retrieval Score 28.97 | 5
Multi-modal Evaluation | DataComp | DataComp Average 29.86 | 5

(10 of 14 rows shown)
