CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling
About
Contrastive Language-Image Pre-training (CLIP) has become a cornerstone of multimodal intelligence. However, recent studies have found that CLIP encodes only one aspect of the feature space, leading to substantial information loss and indistinctive features. To mitigate this issue, this paper introduces a novel strategy that fine-tunes a series of complementary CLIP models and transforms them into a CLIP-MoE. Specifically, we propose a model-agnostic Diversified Multiplet Upcycling (DMU) framework for CLIP. Instead of training multiple CLIP models from scratch, DMU leverages a pre-trained CLIP and fine-tunes it into a diverse set of models with highly cost-effective multistage contrastive learning, thus capturing distinct feature subspaces efficiently. To fully exploit these fine-tuned models while minimizing computational overhead, we transform them into a CLIP-MoE, which dynamically activates a subset of CLIP experts, achieving an effective balance between model capacity and computational cost. Comprehensive experiments demonstrate the superior performance of CLIP-MoE across various zero-shot retrieval and zero-shot image classification tasks, and on downstream Multimodal Large Language Model (MLLM) benchmarks when it is used as a vision encoder.
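The upcycling idea above — replacing a dense layer with several experts and routing each token to only a few of them — can be sketched as a sparse top-k MoE layer. This is a minimal illustration in PyTorch, not the paper's implementation: the expert MLP shape, the number of experts, `top_k=2`, and the linear router are all illustrative assumptions.

```python
# Minimal sparse MoE sketch (illustrative; not the paper's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Top-k sparse Mixture-of-Experts over per-token features.

    In upcycling, a dense FFN is replaced by several expert FFNs (e.g.
    initialized from complementary fine-tuned copies of the base model);
    a learned router activates only the top-k experts per token, so
    capacity grows while per-token compute stays near the dense model's.
    """
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # per-token gating logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        logits = self.router(x)                         # (batch, tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

Only the selected experts run on each token, which is the capacity/compute trade-off the abstract describes; production MoE code typically adds a load-balancing loss and batched expert dispatch, omitted here for brevity.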
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet V2 | -- | -- | 611 |
| Image Classification | EuroSAT | Accuracy | 62.2 | 569 |
| Image Classification | Flowers102 | Accuracy | 72.1 | 558 |
| Image Classification | DTD | Accuracy | 54.9 | 542 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 42.1 | 531 |
| Image Classification | Food101 | Accuracy | 88.7 | 457 |
| Image Classification | SUN397 | Accuracy | 70.1 | 441 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 60.5 | 429 |
| Image Classification | Aircraft | Accuracy | 29 | 333 |
| Image Classification | StanfordCars | Accuracy | 74.9 | 312 |