LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs

About

Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) have recently surged in popularity, showing a remarkable capability to perform zero- or few-shot learning and transfer even in the absence of per-sample labels on target image data. Despite this trend, to date there has been no publicly available dataset of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and publicly release LAION-400M, a dataset of 400 million CLIP-filtered image-text pairs, their CLIP embeddings, and kNN indices that allow efficient similarity search.
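The two dataset components mentioned above (CLIP filtering and kNN similarity search over embeddings) both reduce to cosine similarity between embedding vectors. Below is a minimal, hypothetical sketch in NumPy: `clip_filter` keeps only pairs whose image and caption embeddings exceed a similarity threshold (the `threshold=0.3` default is illustrative, not the dataset's exact pipeline), and `knn_search` is a brute-force stand-in for the released kNN indices, which in practice use an approximate-nearest-neighbour library rather than exhaustive search.

```python
import numpy as np

def _normalize(x):
    # L2-normalize each row so that a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def clip_filter(img_emb, txt_emb, threshold=0.3):
    # Keep (image, text) pairs whose embedding cosine similarity meets the
    # threshold. The 0.3 value is an illustrative assumption, not the
    # dataset's documented setting.
    sims = np.sum(_normalize(img_emb) * _normalize(txt_emb), axis=1)
    return sims >= threshold

def knn_search(query, index_emb, k=5):
    # Brute-force k-nearest-neighbour search by cosine similarity.
    # A real 400M-scale index would use an ANN structure instead.
    sims = (_normalize(index_emb) @ (query / np.linalg.norm(query)))
    return np.argsort(-sims)[:k]
```

Usage: embed a query image or caption with a CLIP encoder, then call `knn_search(query_embedding, dataset_embeddings)` to retrieve the closest stored pairs.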

Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki • 2021

Related benchmarks

Task                     | Dataset                                               | Result           | Rank
-------------------------|-------------------------------------------------------|------------------|-----
Image Classification     | ImageNet V2                                           | -                | 611
Image Classification     | UCF101                                                | Top-1 Acc 71.6   | 455
Text-to-Image Retrieval  | Flickr30k (test)                                      | Recall@1 70.2    | 445
Image Classification     | ImageNet                                              | -                | 431
Classification           | Cars                                                  | Accuracy 89.6    | 395
Image-to-Text Retrieval  | Flickr30k (test)                                      | R@1 87.6         | 392
Image Classification     | CUB                                                   | Accuracy 71.4    | 282
Image Classification     | 11 Downstream Classification Datasets (ImageNet, Flowers102, DTD, OxfordPets, StanfordCars, UCF101, Caltech101, Food101, SUN397, FGVC-Aircraft, EuroSAT) standard (test) | DTD Accuracy 43.1 | 115
Image Classification     | Caltech                                               | Accuracy 92.5    | 101
Classification           | CUB                                                   | Accuracy 71.4    | 93

Showing 10 of 25 rows.
