
Deep Ensembles for Low-Data Transfer Learning

About

In the low-data regime, it is difficult to train good supervised models from scratch. Instead, practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift.
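The greedy ensemble-construction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each candidate model's validation-set class probabilities are already computed, averages member predictions, and at each step adds the candidate that most reduces validation cross-entropy, stopping when no candidate helps. All function names here are hypothetical.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels.

    probs:  array of shape (n_examples, n_classes), rows sum to 1.
    labels: integer array of shape (n_examples,).
    """
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def greedy_ensemble(model_probs, labels, max_size=5):
    """Greedily select models whose inclusion most reduces the
    cross-entropy of the ensemble's averaged validation predictions.

    model_probs: list of (n_examples, n_classes) probability arrays,
                 one per candidate (fine-tuned) model.
    Returns the list of chosen model indices and the final loss.
    """
    chosen = []
    running_sum = np.zeros_like(model_probs[0])
    best_loss = np.inf
    for _ in range(max_size):
        best_idx, best_candidate_loss = None, best_loss
        for i, p in enumerate(model_probs):
            avg = (running_sum + p) / (len(chosen) + 1)
            loss = cross_entropy(avg, labels)
            if loss < best_candidate_loss:
                best_idx, best_candidate_loss = i, loss
        if best_idx is None:
            break  # no remaining candidate improves the ensemble
        chosen.append(best_idx)
        running_sum += model_probs[best_idx]
        best_loss = best_candidate_loss
    return chosen, best_loss
```

By construction the selected ensemble's validation loss is never worse than that of the best single model, since the first greedy step considers every model on its own.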

Basil Mustafa, Carlos Riquelme, Joan Puigcerver, André Susano Pinto, Daniel Keysers, Neil Houlsby • 2020

Related benchmarks

Task                 | Dataset            | Result                  | Rank
Image Classification | CIFAR-100 (test)   | Accuracy: 88.1          | 3518
Image Classification | SUN397 (test)      | Top-1 Accuracy: 67.04   | 136
Image Classification | VTAB-1K 1.0 (test) | Natural Accuracy: 83.6  | 102
Image Classification | ChestX             | Accuracy: 54.32         | 9
Image Classification | Clipart            | Diversity: 75.56        | 6
