Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching
About
Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitively expensive, as it requires a huge amount of data and computing resources. In this paper, we propose a method (BeamCLIP) that can effectively transfer the representations of a large pre-trained multimodal model (CLIP-ViT) into a small target model (e.g., ResNet-18). For unsupervised transfer, we introduce cross-modal similarity matching (CSM), which enables a student model to learn the representations of a teacher model by matching the relative similarity distribution across text prompt embeddings. To better encode the text prompts, we design context-based prompt augmentation (CPA), which can alleviate the lexical ambiguity of input text prompts. Our experiments show that unsupervised representation transfer of a pre-trained vision-language model enables a small ResNet-18 to achieve a better ImageNet-1K top-1 linear probe accuracy (66.2%) than vision-only self-supervised learning (SSL) methods (e.g., SimCLR: 51.8%, SwAV: 63.7%), while closing the gap with supervised learning (69.8%).
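The paper's exact loss formulation and hyperparameters are not reproduced here; the sketch below is only a rough illustration of the cross-modal similarity matching idea, assuming a PyTorch setting. The function name `csm_loss`, the temperature values, and the shapes are hypothetical: each image's similarity distribution over the text prompt embeddings is computed for both the teacher (CLIP-ViT) and the student, and the student is trained to match the teacher's distribution. Because only relative similarities to prompt anchors are matched, no ground-truth labels are needed, which is what makes the transfer unsupervised.

```python
import torch
import torch.nn.functional as F

def csm_loss(student_img_emb, teacher_img_emb, text_emb, tau_s=0.04, tau_t=0.04):
    """Cross-modal similarity matching (illustrative sketch, not the paper's exact loss).

    student_img_emb: (B, D) image embeddings from the small student model,
                     projected to the same dimension as the text embeddings.
    teacher_img_emb: (B, D) image embeddings from the pre-trained CLIP-ViT teacher.
    text_emb:        (K, D) embeddings of K text prompts used as anchors.
    """
    # L2-normalize all embeddings, as in CLIP, so dot products are cosine similarities.
    s = F.normalize(student_img_emb, dim=-1)
    t = F.normalize(teacher_img_emb, dim=-1)
    p = F.normalize(text_emb, dim=-1)

    # Relative similarity distributions over the text-prompt anchors.
    student_log_dist = F.log_softmax(s @ p.T / tau_s, dim=-1)  # (B, K), log-probabilities
    teacher_dist = F.softmax(t @ p.T / tau_t, dim=-1)          # (B, K), probabilities

    # Match the student's distribution to the teacher's via KL divergence.
    return F.kl_div(student_log_dist, teacher_dist, reduction="batchmean")
```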
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 75.1 | 1866 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 57.5 | 798 |
| Image Classification | ImageNet-1K | -- | -- | 524 |
| Image Classification | CIFAR-100 | Accuracy | 67.35 | 331 |
| Image Classification | Flowers-102 | Top-1 Accuracy | 75.86 | 141 |
| Image Classification | STL-10 | Accuracy | 97.45 | 60 |
| Image Classification | Pets37 | Accuracy | 86.94 | 4 |