M2-Encoder: Advancing Bilingual Image-Text Understanding by Large-scale Efficient Pretraining
About
Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, vision-language models that support multiple languages, e.g., both Chinese and English, have lagged behind due to the relative scarcity of large-scale pretraining datasets. Toward this end, we introduce BM-6B, a comprehensive bilingual (Chinese-English) dataset with over 6 billion image-text pairs, aimed at enabling multimodal foundation models to understand images well in both languages. To handle a dataset of this scale, we propose a novel grouped aggregation approach for computing the image-text contrastive loss, which significantly reduces communication overhead and GPU memory demands, yielding a 60% increase in training speed. On BM-6B we pretrain a series of bilingual image-text foundation models with enhanced fine-grained understanding ability. The resulting models, dubbed $M^2$-Encoders (pronounced "M-Square"), set new benchmarks in both languages for multimodal retrieval and classification tasks. Notably, our largest $M^2$-Encoder-10B model achieves top-1 accuracies of 88.5% on ImageNet and 80.7% on ImageNet-CN under a zero-shot classification setting, surpassing previously reported SoTA methods by 2.2% and 21.1%, respectively. The $M^2$-Encoder series represents one of the most comprehensive bilingual image-text foundation models to date, and we are making it available to the research community for further exploration and development.
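The grouped aggregation described above is an efficiency optimization of the standard bidirectional image-text contrastive (InfoNCE) objective used by CLIP-style models. For reference, here is a minimal single-device sketch of that objective in NumPy; it is not the paper's distributed grouped-aggregation implementation, and the function name and setup are illustrative assumptions:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Row i of img_emb and row i of txt_emb are assumed to be a matched pair;
    all other rows in the batch serve as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, sharpened by the temperature
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    diag = np.arange(n)  # the matched (positive) pairs lie on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[diag, diag].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In large-scale distributed training, the similarity matrix over the global batch is what dominates memory and communication; approaches like the grouped aggregation in this work restructure that computation rather than change the loss itself.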
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 92.2 | 460 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 91.2 | 379 |
| Image Classification | ImageNet | Top-1 Accuracy | 88.5 | 324 |
| Image-to-Text Retrieval | MSCOCO | R@1 | 72.8 | 124 |
| Text-to-Image Retrieval | MSCOCO | R@1 | 56.5 | 118 |
| Image-to-Text Retrieval | Flickr30K-CN | R@1 | 93.8 | 99 |
| Text-to-Image Retrieval | Flickr30K-CN | R@1 | 81.5 | 99 |
| Image Retrieval | CARS196 | -- | -- | 56 |
| Text-to-Image Retrieval | COCO-CN | R@1 | 78.7 | 49 |
| Image-to-Text Retrieval | COCO-CN | R@1 | 80.9 | 48 |