
Efficient Vision-Language Pre-training by Cluster Masking

About

We propose a simple strategy for masking image patches during visual-language contrastive learning that improves the quality of the learned representations and the training speed. During each iteration of training, we randomly mask clusters of visually similar image patches, as measured by their raw pixel intensities. This provides an extra learning signal, beyond the contrastive training itself, since it forces a model to predict words for masked visual structures solely from context. It also speeds up training by reducing the amount of data used in each image. We evaluate the effectiveness of our model by pre-training on a number of benchmarks, finding that it outperforms other masking strategies, such as FLIP, on the quality of the learned representation.

Zihao Wei, Zixuan Pan, Andrew Owens • 2024
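
The masking procedure described above lends itself to a short sketch. Below is a minimal, illustrative implementation assuming k-means clustering over raw patch pixel intensities, followed by dropping whole clusters at random until a target fraction of patches is hidden. The function name cluster_mask and the patch size, cluster count, and mask ratio defaults are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of cluster masking: group patches by raw pixel
# intensity with k-means, then mask out entire clusters at random.
import torch

def cluster_mask(image: torch.Tensor, patch_size: int = 16,
                 num_clusters: int = 8, mask_ratio: float = 0.5,
                 iters: int = 10) -> torch.Tensor:
    """Return a boolean mask over patches; True = patch is masked."""
    c, h, w = image.shape
    # Flatten the image into (num_patches, patch_dim) raw-pixel vectors.
    patches = (image.unfold(1, patch_size, patch_size)
                    .unfold(2, patch_size, patch_size))      # (C, H/p, W/p, p, p)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, c * patch_size * patch_size)

    # Plain k-means (Lloyd's algorithm) on the raw intensities.
    centers = patches[torch.randperm(len(patches))[:num_clusters]].clone()
    for _ in range(iters):
        assign = torch.cdist(patches, centers).argmin(dim=1)
        for k in range(num_clusters):
            members = patches[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(dim=0)

    # Randomly hide whole clusters until roughly mask_ratio of patches are masked,
    # so visually similar regions disappear together.
    mask = torch.zeros(len(patches), dtype=torch.bool)
    for k in torch.randperm(num_clusters):
        if mask.float().mean() >= mask_ratio:
            break
        mask |= (assign == k)
    return mask

# Example: mask roughly half of a 224x224 image's 16x16 patches.
mask = cluster_mask(torch.rand(3, 224, 224))
```

In a FLIP-style pipeline, the masked patches would simply be dropped before the image encoder, which is where the training-speed savings would come from; the contrastive objective then has to match captions to images whose coherent visual structures have been removed.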

Related benchmarks

Task                     Dataset       Metric      Result   Rank
Image Classification     ImageNet-1K   Top-1 Acc   62.7     1239
Image Classification     EuroSAT       --          --       569
Image Classification     DTD           --          --       542
Text-to-Image Retrieval  Flickr30K     R@1         57.6     531
Image Classification     CIFAR-10      Accuracy    89       507
Image-to-Text Retrieval  Flickr30K     R@1         43.3     429
Image Classification     SUN397        --          --       425
Classification           Cars          Accuracy    15.1     395
Image Classification     RESISC45      --          --       349
Image Classification     GTSRB         Accuracy    9.6      291

Showing 10 of 46 rows.
