
Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning

About

Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature - the well-known "catastrophic forgetting" issue. In particular, when a model consecutively learns from different visual domains, it tends to forget the past domains in favor of the most recent ones. In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization - for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains, while also easing adaptation to them. Such meta-domains are also generated through randomized image manipulations. We empirically demonstrate in a variety of experiments - spanning from classification to semantic segmentation - that our approach results in models that are less prone to catastrophic forgetting when transferred to new domains.
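The meta-learning objective described above can be sketched in a few lines: take an inner gradient step on the current domain, then add a regularizing gradient from randomized "meta-domains" evaluated at the adapted weights. The snippet below is a minimal, hypothetical illustration on a toy linear classifier, not the paper's implementation; `heavy_augment` is a stand-in (contrast jitter plus noise) for the heavy image manipulations used in the paper, and the first-order update is an assumption in the spirit of first-order MAML/Reptile.

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_augment(x, rng):
    """Stand-in for heavy image manipulations: random contrast
    gain plus additive Gaussian noise (hypothetical choice)."""
    gain = rng.uniform(0.5, 1.5)
    noise = rng.normal(0.0, 0.3, size=x.shape)
    return gain * x + noise

def loss_and_grad(w, x, y):
    """Binary logistic loss for a linear model; returns (loss, grad)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = x.T @ (p - y) / len(y)
    return loss, grad

def meta_step(w, x, y, inner_lr=0.1, outer_lr=0.1, n_meta=3, rng=rng):
    # Inner step: adapt to the current domain.
    _, g = loss_and_grad(w, x, y)
    w_adapt = w - inner_lr * g
    # Regularizer: penalize the loss on randomized meta-domains,
    # evaluated at the adapted weights (first-order approximation:
    # the auxiliary gradients are applied directly to w).
    meta_grad = g.copy()
    for _ in range(n_meta):
        _, g_aux = loss_and_grad(w_adapt, heavy_augment(x, rng), y)
        meta_grad += g_aux / n_meta
    return w - outer_lr * meta_grad

# Toy 2-class problem standing in for one visual domain.
x = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = (x @ w_true > 0).astype(float)

w = np.zeros(5)
for _ in range(100):
    w = meta_step(w, x, y)
```

Because every outer update also lowers the loss on perturbed versions of the current data, the resulting weights sit in a region that transfers better to unseen domains, which is the intuition behind the robustness-to-forgetting result.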

Riccardo Volpi, Diane Larlus, Grégory Rogez • 2020

Related benchmarks

Task               Dataset             Metric                     Result  Rank
Image Recognition  iDigits NC v1       Avg. Incremental Accuracy  89.89   20
Image Recognition  iDomainNet v1 (NC)  Avg. Incremental Accuracy  51.64   20
Image Recognition  iCIFAR-20 v1 (NC)   Avg. Incremental Accuracy  78.98   10
Image Recognition  iDigits v1 (ND)     Avg. Incremental Accuracy  94      10
Image Recognition  iDomainNet ND v1    Incremental Accuracy       48.73   10
Image Recognition  iCIFAR-20 NCD v1    Avg. Incremental Accuracy  70.9    10
Image Recognition  iCIFAR-20 ND v1     Avg. Incremental Accuracy  73.06   10
