
Scaling Language-Free Visual Representation Learning

About

Visual Self-Supervised Learning (SSL) currently underperforms Contrastive Language-Image Pretraining (CLIP) in multimodal settings such as Visual Question Answering (VQA). This multimodal gap is often attributed to the semantics introduced by language supervision, even though visual SSL and CLIP models are often trained on different data. In this work, we ask the question: "Do visual self-supervised approaches lag behind CLIP due to the lack of language supervision, or differences in the training data?" We study this question by training both visual SSL and CLIP models on the same MetaCLIP data, and leveraging VQA as a diverse testbed for vision encoders. In this controlled setup, visual SSL models scale better than CLIP models in terms of data and model capacity, and visual SSL performance does not saturate even after scaling up to 7B parameters. Consequently, we observe visual SSL methods achieve CLIP-level performance on a wide range of VQA and classic vision benchmarks. These findings demonstrate that pure visual SSL can match language-supervised visual pretraining at scale, opening new opportunities for vision-centric representation learning.

David Fan, Shengbang Tong, Jiachen Zhu, Koustuv Sinha, Zhuang Liu, Xinlei Chen, Michael Rabbat, Nicolas Ballas, Yann LeCun, Amir Bar, Saining Xie • 2025
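
To make the controlled comparison concrete, the sketch below shows a LLaVA-style wiring commonly used when evaluating vision encoders on VQA: a frozen vision encoder (trained with either visual SSL or CLIP on the same MetaCLIP-style data) produces patch features, and a small projector maps them into an LLM's token space. This is a hypothetical illustration, not the authors' code; the classes VisionEncoder and VQAAdapter and all dimensions are invented placeholders.

```python
# Hypothetical sketch (not the authors' implementation) of plugging a frozen
# vision encoder into a VQA pipeline. Only the pretraining objective of the
# encoder differs between the SSL and CLIP arms of the comparison.
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Stand-in for a ViT pretrained with either visual SSL or CLIP."""
    def __init__(self, dim=1024, num_layers=2):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 16 * 16, dim)  # toy patch embedding
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, patches):            # patches: (B, N, 3*16*16)
        return self.blocks(self.patch_embed(patches))  # (B, N, dim)

class VQAAdapter(nn.Module):
    """Projector mapping patch features into the LLM's embedding space."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(),
            nn.Linear(llm_dim, llm_dim))

    def forward(self, feats):
        return self.mlp(feats)             # visual tokens to prepend to text tokens

# Usage: apply the same adapter + LLM recipe to each frozen encoder, so any
# difference in VQA accuracy reflects the pretraining objective, not the data.
encoder, adapter = VisionEncoder(), VQAAdapter()
patches = torch.randn(2, 256, 3 * 16 * 16)  # dummy flattened image patches
visual_tokens = adapter(encoder(patches))   # (2, 256, 4096)
```
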

Related benchmarks

Task | Dataset | Metric | Result | Rank
Semantic segmentation | ADE20K (val) | mIoU | 42.7 | 2888
Visual Question Answering | TextVQA | Accuracy | 40.6 | 1285
Semantic segmentation | Cityscapes | mIoU | 68.3 | 658
Visual Question Answering | ChartQA | -- | -- | 371
Semantic segmentation | ADE20K | mIoU | 42.7 | 366
Semantic segmentation | PASCAL VOC (val) | mIoU | 76.1 | 362
Document Visual Question Answering | DocVQA | ANLS | 55.1 | 263
Visual Question Answering | AI2D | Accuracy | 63.8 | 249
Optical Character Recognition | OCRBench | -- | -- | 232
Semantic segmentation | PASCAL VOC 2012 | mIoU | 76.1 | 218

Showing 10 of 27 rows.
