
Domain Generalization via Gradient Surgery

About

In real-life applications, machine learning models often face scenarios where the data distribution changes between training and test domains. When the aim is to make predictions on distributions different from those seen at training, we face a domain generalization problem. Methods that address this issue learn a model using data from multiple source domains, and then apply this model to an unseen target domain. Our hypothesis is that when training with multiple domains, conflicting gradients within each mini-batch contain information specific to the individual domains which is irrelevant to the others, including the test domain. If left untouched, such disagreement may degrade generalization performance. In this work, we characterize the conflicting gradients emerging in domain shift scenarios and devise novel gradient agreement strategies based on gradient surgery to alleviate their effect. We validate our approach on image classification tasks with three multi-domain datasets, showing the value of the proposed agreement strategy in enhancing the generalization capability of deep learning models in domain shift scenarios.
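To make the idea concrete, here is a minimal NumPy sketch of one possible sign-based agreement strategy in the spirit described above: components of the per-domain gradients whose signs conflict are zeroed out before the update. The function name, shapes, and the all-domains-agree rule are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def agreement_sum(domain_grads):
    """Combine per-domain gradients, dropping components with sign conflicts.

    domain_grads: array of shape (n_domains, n_params), one flattened
    gradient vector per source domain. Components on which every domain
    agrees in sign are summed; conflicting components are set to zero,
    discarding domain-specific directions that may hurt generalization.
    (Illustrative sketch; not the paper's reference code.)
    """
    grads = np.asarray(domain_grads, dtype=float)
    signs = np.sign(grads)
    # A component "agrees" when all domains share the first domain's sign.
    agree = np.all(signs == signs[0], axis=0)
    return np.where(agree, grads.sum(axis=0), 0.0)

# Two source domains: they agree on components 0 and 2, conflict on 1.
g = agreement_sum([[0.5, -1.0, 0.2],
                   [0.3,  2.0, 0.1]])
print(g)  # [0.8 0.  0.3]
```

In practice the same masking would be applied per parameter tensor inside the training loop, after computing one backward pass per source domain.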

Lucas Mansilla, Rodrigo Echeveste, Diego H. Milone, Enzo Ferrante • 2021

Related benchmarks

Task                      Dataset                                    Result       Rank
Object Counting           FSC-147 (test)                             MAE 29.83    297
Crowd Counting            ShanghaiTech Part A (test)                 MAE 130.7    227
Crowd Counting            ShanghaiTech Part B (test)                 MAE 17.3     191
Crowd Counting            UCF-QNRF (Q) (test)                        MAE 129.1    31
Deformable Registration   ACDC (test)                                Dice 63.4    25
Object Counting           FSCD-LVIS (test)                           MAE 30.55    21
Primitive Fitting         Primitive Fitting (Out-of-distribution)    mIoU 71.6    14
Crowd Counting            ShanghaiTech-A -> UCF-QNRF (test)          MAE 129.1    13
Primitive Fitting         Primitive Fitting (In-distribution)        mIoU 92.6    12
Deformable Registration   M&M (test)                                 Dice 63.41   10
