
Explicit Inductive Bias for Transfer Learning with Convolutional Networks

About

In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We show the benefit of having an explicit inductive bias towards the initial model, and we eventually recommend a simple $L^2$ penalty with the pre-trained model as the reference, as the baseline penalty for transfer learning tasks.

Xuhong Li, Yves Grandvalet, Franck Davoine • 2018
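The recommended penalty, often called L²-SP, replaces ordinary weight decay (which pulls weights toward zero) with a squared distance to the pre-trained weights. Below is a minimal sketch of that idea in PyTorch; the helper name `l2_sp_penalty`, the ResNet-18 backbone, and the coefficient `alpha` are illustrative choices, not prescribed by the paper.

import torch
import torchvision

# Pre-trained source model; a frozen copy of its weights serves as the
# reference point w^0 that the penalty pulls toward.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
reference = {name: p.detach().clone() for name, p in model.named_parameters()}

def l2_sp_penalty(model, reference):
    """Sum of squared distances between current and pre-trained weights."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + torch.sum((p - reference[name]) ** 2)
    return penalty

# Inside the training loop, add the penalty to the task loss in place of
# plain weight decay (hypothetical names: criterion, x, y, alpha):
#   loss = criterion(model(x), y) + alpha * l2_sp_penalty(model, reference)

Note that in the paper the penalty applies to the layers shared with the source network; the freshly initialized classifier head, which has no pre-trained reference, gets a plain $L^2$ penalty instead.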

Related benchmarks

Task                  Dataset                Metric          Result  Rank
Graph Classification  PROTEINS               Accuracy        65.95   742
Image Classification  CIFAR-100              Top-1 Accuracy  81.43   622
Image Classification  DTD                    Accuracy        72.18   487
Image Classification  CIFAR-10               --              --      471
Image Classification  DTD                    Accuracy        69.01   419
Image Classification  SVHN                   Accuracy        96.01   359
Image Classification  Stanford Cars (test)   Accuracy        94.42   306
Image Classification  Aircraft               Accuracy        86.55   302
Image Classification  CUB-200-2011 (test)    Top-1 Accuracy  82.22   276
Image Classification  Oxford-IIIT Pets       Accuracy        89.43   259
(Showing 10 of 82 rows)
