
COSA: Concatenated Sample Pretrained Vision-Language Foundation Model

About

Due to the limited scale and quality of video-text training corpora, most vision-language foundation models employ image-text datasets for pretraining and primarily focus on modeling visually semantic representations while disregarding temporal semantic representations and correlations. To address this issue, we propose COSA, a COncatenated SAmple pretrained vision-language foundation model. COSA jointly models visual contents and event-level temporal cues using only image-text corpora. We achieve this by sequentially concatenating multiple image-text pairs as inputs for pretraining. This transformation effectively converts existing image-text corpora into a pseudo long-form video-paragraph corpus, enabling richer scene transformations and explicit event-description correspondence. Extensive experiments demonstrate that COSA consistently improves performance across a broad range of downstream tasks, including long-form/short-form video-text tasks and image-text tasks such as retrieval, captioning, and question answering. Notably, COSA achieves state-of-the-art results on various competitive benchmarks. Code and models are released at https://github.com/TXH-mercury/COSA.
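The core idea in the abstract — grouping several image-text pairs in sequence so that the images act as pseudo video frames and their joined captions act as a pseudo paragraph — can be sketched as follows. This is an illustrative sketch, not the released COSA code; the function name, group size, and random-grouping policy are assumptions.

```python
import random

def concat_samples(image_text_pairs, group_size=4, seed=None):
    """Build pseudo long-form video-paragraph samples from image-text pairs.

    Each group of `group_size` pairs becomes one sample: the images are the
    pseudo video "frames", and their captions, joined in the same order,
    form the pseudo paragraph. The ordering gives an explicit
    event-description correspondence (frame i <-> sentence i).
    Sketch only; details are assumptions, not COSA's implementation.
    """
    rng = random.Random(seed)
    pairs = list(image_text_pairs)
    rng.shuffle(pairs)  # random grouping yields varied scene transitions
    samples = []
    for i in range(0, len(pairs) - group_size + 1, group_size):
        group = pairs[i:i + group_size]
        frames = [img for img, _ in group]              # pseudo video frames
        paragraph = " ".join(cap for _, cap in group)   # pseudo paragraph
        samples.append({"frames": frames, "text": paragraph})
    return samples
```

A pretraining loader built this way can reuse an ordinary image-text dataset unchanged, since only the batching step differs.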

Sihan Chen, Xingjian He, Handong Li, Xiaojie Jin, Jiashi Feng, Jing Liu • 2023

Related benchmarks

Task                        Dataset               Metric     Result   Rank
Video Question Answering    MSRVTT-QA             Accuracy   49.2     481
Visual Question Answering   VQA v2 (test-std)     Accuracy   80.54    466
Text-to-Image Retrieval     Flickr30K             R@1        90.2     460
Text-to-Video Retrieval     DiDeMo (test)         R@1        70.5     376
Video Question Answering    MSVD-QA               Accuracy   60.0     340
Video Question Answering    ActivityNet-QA        Accuracy   49.9     319
Text-to-Video Retrieval     LSMDC (test)          R@1        39.4     225
Text-to-Video Retrieval     MSRVTT (test)         R@1        0.579    155
Video Question Answering    TGIF-QA               Accuracy   79.5     147
Image-to-Text Retrieval     MSCOCO                R@1        68.5     124
(Showing 10 of 19 benchmark rows.)

Other info

Code: https://github.com/TXH-mercury/COSA