
HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware Attention

About

The success of large-scale contrastive vision-language pretraining (CLIP) has benefited both visual recognition and multimodal content understanding. Its concise design gives CLIP an advantage in inference efficiency over vision-language models with heavier cross-attention fusion layers, making it a popular choice for a wide spectrum of downstream tasks. However, CLIP does not explicitly capture the hierarchical nature of the high-level and fine-grained semantics conveyed in images and texts, which is arguably critical to vision-language understanding and reasoning. To this end, we equip both the visual and language branches of CLIP with hierarchy-aware attention, yielding Hierarchy-aware CLIP (HiCLIP), which progressively discovers semantic hierarchies layer by layer from both images and texts in an unsupervised manner. This hierarchical aggregation significantly improves cross-modal alignment. To demonstrate the advantages of HiCLIP, we conduct qualitative analysis of its unsupervised hierarchy induction during inference, as well as extensive quantitative experiments on both visual recognition and vision-language downstream tasks.
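The contrastive objective that HiCLIP inherits from CLIP aligns image and text embeddings so that matching pairs score higher than all mismatched pairs in a batch. A minimal NumPy sketch of this symmetric contrastive (InfoNCE-style) loss is shown below; the function name and the fixed temperature value are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: arrays of shape (batch, dim); row i of each
    is assumed to be a matching image-text pair.
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, sharpened by the temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def softmax_xent(l):
        # Cross-entropy where the correct "class" for row i is column i
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (softmax_xent(logits) + softmax_xent(logits.T))
```

Perfectly aligned pairs (identical embeddings on the diagonal) should produce a much lower loss than mispaired batches, which is what drives the representations of matching images and texts together during training.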

Shijie Geng, Jianbo Yuan, Yu Tian, Yuxiao Chen, Yongfeng Zhang • 2023

Related benchmarks

Task | Dataset | Accuracy | Rank
Image Classification | Food-101 | 68.9 | 494
Image Classification | DTD | 44.1 | 487
Image Classification | Stanford Cars | 26.8 | 477
Image Classification | SUN397 | 65.2 | 425
Image Classification | CIFAR100 | 56.2 | 331
Classification | Cars | 5.4 | 314
Image Classification | Aircraft | 4.6 | 302
Image Classification | Oxford-IIIT Pets | 73.5 | 259
Image Classification | CIFAR10 | 80.4 | 240
Image Classification | Pets | 43.6 | 204
Showing 10 of 22 rows
