
PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining

About

Large-scale vision-language pre-training has achieved promising results on downstream tasks. Existing methods rely heavily on the assumption that the image-text pairs crawled from the Internet are in perfect one-to-one correspondence. In real scenarios, however, this assumption rarely holds: the text description, obtained by crawling the metadata affiliated with an image, often suffers from semantic mismatch with the image and from mutual compatibility with other, nominally negative pairs. To address these issues, we introduce PyramidCLIP, which constructs an input pyramid with different semantic levels for each modality, and aligns visual and linguistic elements hierarchically via peer-level semantics alignment and cross-level relation alignment. Furthermore, we soften the loss on negative samples (unpaired samples) so as to weaken this strict constraint during the pre-training stage, mitigating the risk of forcing the model to distinguish compatible negative pairs. Experiments on five downstream tasks demonstrate the effectiveness of the proposed PyramidCLIP. In particular, with the same 15 million pre-training image-text pairs, PyramidCLIP exceeds CLIP on ImageNet zero-shot classification top-1 accuracy by 10.6%/13.2%/10.0% with ResNet50/ViT-B32/ViT-B16 image encoders respectively. When scaled to larger datasets, PyramidCLIP achieves state-of-the-art results on several downstream tasks. Notably, the results of PyramidCLIP-ResNet50 trained on 143M image-text pairs surpass those of CLIP trained on 400M pairs on ImageNet zero-shot classification, significantly improving the data efficiency of CLIP.
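The softened negative-sample loss described above can be illustrated as a CLIP-style symmetric contrastive objective in which the hard one-hot targets are replaced by label-smoothed targets, so negatives are no longer pushed to exactly zero similarity. This is a hedged sketch, not the paper's exact formulation: the function name, the `smoothing` hyperparameter, and the uniform distribution of smoothed mass over negatives are all assumptions.

```python
import numpy as np

def softened_clip_loss(image_emb, text_emb, temperature=0.07, smoothing=0.1):
    """CLIP-style symmetric contrastive loss with softened negative targets.

    Sketch only: PyramidCLIP's actual softening scheme may differ;
    `smoothing` and the uniform spread over negatives are assumptions.
    """
    # L2-normalize the embeddings so logits are cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = image_emb @ text_emb.T / temperature  # (N, N) similarity matrix
    n = logits.shape[0]

    # Hard one-hot targets would force all off-diagonal (negative) pairs
    # toward zero probability; label smoothing keeps a small mass on them,
    # weakening the constraint for compatible negative pairs.
    targets = np.full((n, n), smoothing / (n - 1))
    np.fill_diagonal(targets, 1.0 - smoothing)

    def cross_entropy(lg, tg):
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -(tg * log_probs).sum(axis=1).mean()

    # Symmetric loss: image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, targets) + cross_entropy(logits.T, targets))
```

With `smoothing=0.0` this reduces to the standard CLIP InfoNCE objective, making the softening a strict relaxation of the original loss.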

Yuting Gao, Jinfeng Liu, Zihan Xu, Jun Zhang, Ke Li, Rongrong Ji, Chunhua Shen • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 50.7 | 840
Image Classification | CIFAR-100 | -- | -- | 622
Image Classification | ImageNet-1K | Top-1 Accuracy | 78 | 524
Image Classification | Food-101 | Accuracy | 88.1 | 494
Image Classification | DTD | Accuracy | 79.3 | 487
Image Classification | Stanford Cars | Accuracy | 86.9 | 477
Image Classification | SUN397 | Accuracy | 79.9 | 425
Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 74.5 | 423
Image-to-Text Retrieval | Flickr30k (test) | Recall@1 | 86.3 | 370
Image Classification | Aircraft | Accuracy | 53.1 | 302

Showing 10 of 28 rows.
