
MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

About

Contrastive pretraining of image-text foundation models, such as CLIP, has demonstrated excellent zero-shot performance and improved robustness on a wide range of downstream tasks. However, these models use large transformer-based encoders with significant memory and latency overhead, which poses challenges for deployment on mobile devices. In this work, we introduce MobileCLIP -- a new family of efficient image-text models optimized for runtime performance, along with a novel and efficient training approach, namely multi-modal reinforced training. The proposed training approach leverages knowledge transfer from an image captioning model and an ensemble of strong CLIP encoders to improve the accuracy of efficient models. Our approach avoids train-time compute overhead by storing the additional knowledge in a reinforced dataset. MobileCLIP sets a new state-of-the-art latency-accuracy tradeoff for zero-shot classification and retrieval tasks on several datasets. Our MobileCLIP-S2 variant is 2.3× faster and more accurate than the previous best CLIP model based on ViT-B/16. We further demonstrate the effectiveness of our multi-modal reinforced training by training a CLIP model with a ViT-B/16 image backbone, achieving a +2.9% average performance improvement over the previous best on 38 evaluation benchmarks. Moreover, we show that the proposed approach achieves 10×-1000× improved learning efficiency compared with non-reinforced CLIP training. Code and models are available at https://github.com/apple/ml-mobileclip.
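To make the training objective concrete, below is a minimal sketch of what a multi-modal reinforced training loss could look like: the standard symmetric CLIP contrastive loss combined with a distillation term that matches the student's image-text similarity distribution to one computed from stored teacher embeddings (the "reinforced dataset"). The function name, the equal image/text weighting, and the `alpha` mixing coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def reinforced_clip_loss(img_emb, txt_emb, teacher_img_emb, teacher_txt_emb,
                         temperature=0.07, alpha=0.5):
    """Hypothetical sketch: CLIP contrastive loss plus a distillation term
    against similarities from precomputed (stored) teacher embeddings."""
    # Normalize so dot products are cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    t_img = F.normalize(teacher_img_emb, dim=-1)
    t_txt = F.normalize(teacher_txt_emb, dim=-1)

    # Student image-text similarity logits; the diagonal holds matched pairs.
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0))

    # Standard symmetric CLIP contrastive loss (image-to-text + text-to-image).
    clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                       F.cross_entropy(logits.t(), targets))

    # Distillation: KL divergence between teacher and student similarity
    # distributions, in both directions. Teacher logits come from embeddings
    # stored offline, so no teacher forward pass is needed at train time.
    teacher_logits = t_img @ t_txt.t() / temperature
    distill_loss = 0.5 * (
        F.kl_div(F.log_softmax(logits, dim=-1),
                 F.softmax(teacher_logits, dim=-1), reduction="batchmean") +
        F.kl_div(F.log_softmax(logits.t(), dim=-1),
                 F.softmax(teacher_logits.t(), dim=-1), reduction="batchmean"))

    return (1 - alpha) * clip_loss + alpha * distill_loss
```

Because the teacher similarities are derived from embeddings saved alongside the dataset, the extra supervision costs only storage and lookup rather than running the large teacher ensemble every step, which is the key to avoiding train-time compute overhead.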

Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel• 2023

Related benchmarks

Task                      Dataset            Metric          Result  Rank
Image Classification      ImageNet-A         Top-1 Acc       58.7    654
Image Classification      ImageNet-V2        Top-1 Acc       69.8    611
Text-to-Image Retrieval   Flickr30K          R@1             77.3    531
Image Classification      ImageNet-R         Top-1 Acc       89.6    529
Text-to-Image Retrieval   Flickr30K (test)   Recall@1        74.9    445
Image-to-Text Retrieval   Flickr30K          R@1             92.3    429
Image Classification      ImageNet-Sketch    Top-1 Accuracy  64.5    407
Image Classification      ImageNet (val)     Accuracy        76.8    300
Image Classification      ObjectNet          --              --      219
Text-to-Image Retrieval   COCO               Recall@1        50.6    156

(Showing 10 of 25 rows.)
