
TULIP: Token-length Upgraded CLIP

About

We address the challenge of representing long captions in vision-language models such as CLIP. By design, these models are limited by fixed, absolute positional encodings, restricting inputs to a maximum of 77 tokens and hindering performance on tasks requiring longer descriptions. Although recent work has attempted to overcome this limit, the proposed approaches struggle to model token relationships over longer distances and simply extend to a new, still-fixed token length. Instead, we propose a generalizable method, named TULIP, able to upgrade the token length to any length for CLIP-like models. We do so by improving the architecture with relative position encodings, followed by a training procedure that (i) distills the original CLIP text encoder into an encoder with relative position encodings and (ii) enhances the model for aligning longer captions with images. By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation. The code repository is available at https://github.com/ivonajdenkoska/tulip.
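To illustrate why relative position encodings remove the fixed 77-token ceiling, here is a minimal numpy sketch. It is not TULIP's exact scheme; the function names, the clipping distance `max_rel`, and the MSE form of the distillation objective are illustrative assumptions. The key point is that an absolute embedding table has one entry per position (so it cannot index beyond its size), while a relative table is indexed by clipped pairwise distances and therefore serves sequences of any length.

```python
import numpy as np

ABS_MAX = 77  # CLIP's fixed text context length

def absolute_indices(seq_len):
    # A fixed absolute position table has exactly ABS_MAX entries,
    # so any longer sequence simply cannot be indexed.
    if seq_len > ABS_MAX:
        raise ValueError(f"absolute table only covers {ABS_MAX} positions")
    return np.arange(seq_len)

def relative_indices(seq_len, max_rel=16):
    # Hypothetical relative scheme: clip the signed distance j - i into
    # [-max_rel, max_rel] and shift it to be non-negative, so a single
    # (2 * max_rel + 1)-entry bias table works for any sequence length.
    pos = np.arange(seq_len)
    rel = pos[None, :] - pos[:, None]          # (seq_len, seq_len) signed distances
    return np.clip(rel, -max_rel, max_rel) + max_rel

def distillation_loss(student_emb, teacher_emb):
    # Step (i) of the training procedure, sketched as a simple MSE:
    # match the relative-position student to the frozen CLIP teacher
    # on captions that still fit within 77 tokens.
    return float(np.mean((student_emb - teacher_emb) ** 2))
```

For a 100-token caption, `absolute_indices(100)` raises, while `relative_indices(100)` returns a valid 100x100 index matrix into the same small bias table, which is what lets the upgraded encoder accept captions of arbitrary length.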

Ivona Najdenkoska, Mohammad Mahdi Derakhshani, Yuki M. Asano, Nanne van Noord, Marcel Worring, Cees G. M. Snoek (2024)

Related benchmarks

Task                       Dataset    Metric  Result  Rank
Text-to-Image Retrieval    Flickr30K  R@1     41.6    531
Image-to-Text Retrieval    Flickr30K  R@1     56.7    429
Text-to-Image Retrieval    COCO       R@1     46.1    156
Image-to-Text Retrieval    COCO       R@1     62.6    149
Image-to-Text Retrieval    DCI        R@1     66.0    79
Text-to-Image Retrieval    DCI        R@1     66.2    79
Text-to-Image Retrieval    Urban-1K   R@1     86.6    40
Text-to-Image Retrieval    DOCCI      R@1     79.1    38
Image-to-Text Retrieval    DOCCI      R@1     77.9    38
Image-to-Text Retrieval    Urban-1K   R@1     90.1    36

(10 of 15 rows shown)
