VirTex: Learning Visual Representations from Textual Annotations

About

The de facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
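The core idea above -- train a visual backbone from scratch by predicting caption tokens, then reuse the backbone for recognition tasks -- can be sketched as a toy model. This is an illustrative assumption-laden sketch, not the authors' implementation: the paper uses a ResNet-50 backbone and a bidirectional transformer decoder, while here a tiny conv stem and a single decoder layer stand in for them.

```python
import torch
import torch.nn as nn

class ToyVirTex(nn.Module):
    """Toy VirTex-style model: visual features condition a caption decoder."""
    def __init__(self, vocab_size=100, d_model=64):
        super().__init__()
        # Visual backbone, trained from scratch (stand-in for ResNet-50).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # 4x4 grid of visual features
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        # Caption decoder attends to the visual features (one layer for brevity).
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        feats = self.backbone(images)              # (B, D, 4, 4)
        memory = feats.flatten(2).transpose(1, 2)  # (B, 16, D) visual "memory"
        tgt = self.embed(tokens)                   # (B, T, D)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.head(out)                      # (B, T, vocab) token logits

model = ToyVirTex()
images = torch.randn(2, 3, 32, 32)
tokens = torch.randint(0, 100, (2, 7))
# Next-token prediction: feed tokens[:-1], score against tokens[1:].
logits = model(images, tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 100), tokens[:, 1:].reshape(-1))
loss.backward()  # captioning gradients flow into the backbone, which transfers
```

After pretraining, the decoder is discarded and `model.backbone` is the representation transferred to classification, detection, or segmentation.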

Karan Desai, Justin Johnson • 2020

Related benchmarks

Task                      Dataset             Metric           Result   Rank
Object Detection          COCO 2017 (val)     --               --       2454
Instance Segmentation     COCO 2017 (val)     --               --       1144
Image Classification      ImageNet-1k (val)   Top-1 Accuracy   52.8     840
Image Classification      ImageNet-1k (val)   Top-1 Accuracy   53.8     706
Semantic Segmentation     Cityscapes          mIoU             72.5     578
Image Classification      ImageNet            --               --       429
Depth Estimation          NYU Depth V2        RMSE             0.613    177
Semantic Segmentation     Pascal VOC          mIoU             0.727    172
Text-to-Image Retrieval   MSCOCO (1K test)    R@1              44       104
Image-to-Text Retrieval   MSCOCO (1K test)    R@1              58.1     82

(Showing 10 of 17 rows)

Other info

Code
