
Sequential Learning for Domain Generalization

About

In this paper we propose a sequential learning framework for Domain Generalization (DG), the problem of training a model that is robust to domain shift by design. Various DG approaches have been proposed with different motivating intuitions, but they typically optimize for a single step of domain generalization -- training on one set of domains and generalizing to one other. Our sequential learning is inspired by the idea of lifelong learning, where accumulated experience means that learning the $n^{th}$ thing becomes easier than the $1^{st}$ thing. In DG this means encountering a sequence of domains and at each step training to maximize performance on the next domain. The performance at domain $n$ then depends on the previous $n-1$ learning problems. Thus backpropagating through the sequence means optimizing performance not just for the next domain, but for all following domains. Training on all such sequences of domains provides dramatically more 'practice' for a base DG learner compared to existing approaches, thus improving performance on a true testing domain. This strategy can be instantiated for different base DG algorithms, but we focus on its application to the recently proposed Meta-Learning Domain Generalization (MLDG). We show that for MLDG it leads to a simple-to-implement and fast algorithm that provides consistent performance improvement on a variety of DG benchmarks.
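The core idea above -- walk a sequence of domains, adapt on each one, and score the adapted model on the next -- can be sketched in a few lines. The sketch below is a first-order simplification on a toy linear-regression problem (the paper's method backpropagates through the inner updates; here each future domain's gradient is simply accumulated, in the spirit of first-order meta-learning). All names, the toy domains, and the learning rates are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: every "domain" shares the same underlying linear map but adds
# a domain-specific shift, standing in for domain shift. Illustrative only.
w_true = np.array([2.0, -1.0])

def make_domain(shift, n=50):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + shift + 0.01 * rng.normal(size=n)
    return X, y

domains = [make_domain(s) for s in (-0.5, 0.0, 0.5, 1.0)]

def loss_and_grad(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def sequential_meta_step(w, domain_seq, inner_lr=0.1):
    """Walk a domain sequence: take an inner update on the current domain,
    evaluate on the next domain, and accumulate a (first-order) meta-gradient
    from every such transition in the sequence."""
    meta_grad = np.zeros_like(w)
    w_t = w.copy()
    for (Xs, ys), (Xq, yq) in zip(domain_seq[:-1], domain_seq[1:]):
        _, g = loss_and_grad(w_t, Xs, ys)
        w_t = w_t - inner_lr * g             # adapt on the current domain
        _, gq = loss_and_grad(w_t, Xq, yq)   # score on the next domain
        meta_grad += gq                      # first-order contribution
    return meta_grad

w = np.zeros(2)
for step in range(200):
    order = rng.permutation(len(domains))    # a fresh domain sequence each step
    seq = [domains[i] for i in order]
    w -= 0.05 * sequential_meta_step(w, seq)
```

Sampling a new domain ordering at every meta-step is what provides the extra 'practice' the abstract refers to: each permutation of the training domains is a distinct sequential generalization problem for the same base learner.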

Da Li, Yongxin Yang, Yi-Zhe Song, Timothy Hospedales • 2020

Related benchmarks

Task                  Dataset          Result                  Rank
Image Classification  PACS v1 (test)   Average Accuracy: 81.5  92
Object Recognition    VLCS             Average Accuracy: 73.5  31
Action Recognition    IXMAS            Average Accuracy: 93.1  30
Object Recognition    PACS (test)      Accuracy (Art): 80.6    9
