
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities

About

In this work, we present a conceptually simple and effective method for training a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we replace its text encoder with the pre-trained multilingual text encoder XLM-R, and align the language and image representations with a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations on a wide range of tasks, setting new state-of-the-art performance on tasks including ImageNet-CN, Flickr30k-CN, COCO-CN, and XTD. Further, we obtain performance very close to CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.
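To make the two-stage schema concrete, below is a minimal PyTorch sketch. The checkpoint names, [CLS] pooling, projection layer, frozen image tower, and plain MSE/InfoNCE losses are illustrative assumptions for this sketch, not the authors' released implementation (see the FlagAI repository for that).

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer, XLMRobertaModel, XLMRobertaTokenizer

# Frozen CLIP teacher (text + image towers) and an XLM-R student text encoder.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
student = XLMRobertaModel.from_pretrained("xlm-roberta-large")
xlm_tok = XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")

# Assumed projection from XLM-R's hidden size into CLIP's embedding space.
proj = torch.nn.Linear(student.config.hidden_size, clip.config.projection_dim)

def student_embed(texts):
    batch = xlm_tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = student(**batch).last_hidden_state[:, 0]  # [CLS]-token pooling
    return proj(hidden)

def teacher_embed(texts):
    batch = clip_tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return clip.get_text_features(**batch)

# Stage 1, teacher learning: on parallel text pairs, pull the student's
# embeddings of both languages toward the frozen CLIP text embedding.
def distill_loss(en_texts, zh_texts):
    t = teacher_embed(en_texts)
    return F.mse_loss(student_embed(en_texts), t) + F.mse_loss(student_embed(zh_texts), t)

# Stage 2, contrastive learning: symmetric InfoNCE on image-text pairs;
# the image tower is kept frozen here for simplicity.
def contrastive_loss(pixel_values, texts, temperature=0.07):
    with torch.no_grad():
        img = clip.get_image_features(pixel_values=pixel_values)
    img = F.normalize(img, dim=-1)
    txt = F.normalize(student_embed(texts), dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```

After stage 1 the student already mimics CLIP's text space for multiple languages; stage 2 then fine-tunes the alignment directly against image features.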

Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu • 2022

Related benchmarks

| Task                    | Dataset            | Metric         | Result  | Rank |
|-------------------------|--------------------|----------------|---------|------|
| Image Classification    | ImageNet 1k (test) | Top-1 Accuracy | 74.7    | 798  |
| Image Classification    | ImageNet-A         | Top-1 Accuracy | 70.4    | 553  |
| Image Classification    | ImageNet-V2        | Top-1 Accuracy | 68.8    | 487  |
| Image Classification    | ImageNet-R         | Top-1 Accuracy | 87.9    | 474  |
| Text-to-Image Retrieval | Flickr30K          | R@1            | 72.5    | 460  |
| Image-to-Text Retrieval | Flickr30K          | R@1            | 86      | 379  |
| Image Classification    | ImageNet-Sketch    | Top-1 Accuracy | 59.2    | 360  |
| Image-to-Text Retrieval | MSCOCO             | R@1            | 58.6    | 124  |
| Text-to-Image Retrieval | MSCOCO             | R@1            | 42.9    | 118  |
| Text-to-Image Retrieval | MSCOCO (1K test)   | R@1            | 6.39e+3 | 104  |

Showing 10 of 33 rows.

Other info

Code: https://github.com/FlagAI-Open/FlagAI
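For quick experimentation, the released model can also be loaded through Hugging Face Transformers. The checkpoint name BAAI/AltCLIP and the example image URL below are assumptions for this usage sketch; the bilingual prompts show the altered XLM-R text tower handling both languages.

```python
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Zero-shot image-text matching with English and Chinese prompts.
inputs = processor(text=["a photo of a cat", "一张猫的照片"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```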
