
Long-CLIP: Unlocking the Long-Text Capability of CLIP

About

Contrastive Language-Image Pre-training (CLIP) has been the cornerstone for zero-shot classification, text-image retrieval, and text-to-image generation by aligning the image and text modalities. Despite its widespread adoption, a significant limitation of CLIP lies in the inadequate length of its text input: the input is restricted to 77 tokens, and an empirical study shows the actual effective length is even less than 20 tokens. This prevents CLIP from handling detailed descriptions, limiting its applications for image retrieval and for text-to-image generation with extensive prerequisites. To this end, we propose Long-CLIP as a plug-and-play alternative to CLIP that supports long-text input, retains or even surpasses CLIP's zero-shot generalizability, and stays aligned with the CLIP latent space, so it can replace CLIP in downstream frameworks without any further adaptation. Nevertheless, achieving this goal is far from straightforward: simplistic fine-tuning results in a significant degradation of CLIP's performance, and substituting the text encoder with a language model that supports longer contexts necessitates pretraining on vast amounts of data, incurring significant expense. Accordingly, Long-CLIP introduces an efficient fine-tuning solution on top of CLIP with two novel strategies designed to maintain the original capabilities: (1) a knowledge-preserved stretching of the positional embedding and (2) a primary component matching of CLIP features. Leveraging just one million extra long text-image pairs, Long-CLIP outperforms CLIP by about 20% in long-caption text-image retrieval and by 6% in traditional text-image retrieval tasks, e.g., COCO and Flickr30k. Furthermore, Long-CLIP offers enhanced capabilities for generating images from detailed text descriptions when it replaces CLIP in a plug-and-play manner.
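The first strategy can be illustrated with a short sketch: keep the well-trained leading positional embeddings untouched and interpolate only the remaining positions to reach a longer context. This is a minimal NumPy sketch, not the authors' implementation; the function name and the specific values `keep=20` and `target_len=248` are illustrative assumptions based on the abstract's "effective length less than 20" observation.

```python
import numpy as np

def knowledge_preserved_stretch(pos_emb, keep=20, target_len=248):
    """Stretch positional embeddings of shape (77, d) to (target_len, d).

    The first `keep` positions (the well-trained, effective ones) are
    preserved exactly; the remaining positions are linearly interpolated
    onto a denser grid to fill the longer context.
    `keep` and `target_len` are hypothetical values for illustration.
    """
    head = pos_emb[:keep]          # preserved, well-trained positions
    tail = pos_emb[keep:]          # positions to be stretched
    n_new = target_len - keep
    # Interpolate each embedding dimension independently over [0, 1].
    old_x = np.linspace(0.0, 1.0, tail.shape[0])
    new_x = np.linspace(0.0, 1.0, n_new)
    stretched = np.stack(
        [np.interp(new_x, old_x, tail[:, j]) for j in range(tail.shape[1])],
        axis=1,
    )
    return np.concatenate([head, stretched], axis=0)

# Toy example with CLIP-like shapes: 77 positions, 512-dim embeddings.
emb = np.random.randn(77, 512).astype(np.float32)
long_emb = knowledge_preserved_stretch(emb)
print(long_emb.shape)  # (248, 512)
```

Because the first 20 rows pass through unchanged, the model's behavior on short prompts is preserved, while the interpolated tail gives the text encoder room for longer captions during fine-tuning.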

Beichen Zhang, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Jiaqi Wang • 2024

Related benchmarks

Task                      Dataset               Metric     Result   Rank
Image Classification      ImageNet V2           --         --       611
Image Classification      CIFAR10 (test)        Accuracy   85.52    585
Text-to-Image Retrieval   Flickr30K             R@1        41.2     531
Image-to-Text Retrieval   Flickr30K 1K (test)   --         --       491
Text-to-Image Retrieval   Flickr30k (test)      Recall@1   76.22    445
Image Classification      SUN397                Accuracy   72.5     441
Text-to-Image Retrieval   Flickr30K 1K (test)   --         --       432
Image-to-Text Retrieval   Flickr30K             R@1        53.4     429
Text-to-Video Retrieval   DiDeMo (test)         R@1        32.4     399
Image-to-Text Retrieval   Flickr30k (test)      R@1        90       392

Showing 10 of 127 rows.
