Multi-Task Learning as a Bargaining Game

About

In multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks. Joint training reduces computation costs and improves data efficiency; however, since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts. A common method for alleviating this issue is to combine per-task gradients into a joint update direction using a particular heuristic. In this paper, we propose viewing the gradient combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update. Under certain assumptions, the bargaining problem has a unique solution, known as the Nash Bargaining Solution, which we propose to use as a principled approach to multi-task learning. We describe a new MTL optimization procedure, Nash-MTL, and derive theoretical guarantees for its convergence. Empirically, we show that Nash-MTL achieves state-of-the-art results on multiple MTL benchmarks in various domains.
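The core of this approach is the gradient-combination step: the Nash Bargaining Solution corresponds to positive per-task weights alpha satisfying (G Gᵀ) alpha = 1/alpha elementwise, where G stacks the per-task gradients, and the joint update is the alpha-weighted sum of gradients. The sketch below illustrates that characterization with a simple damped fixed-point iteration on a toy problem; the solver, function names, and hyperparameters here are illustrative assumptions, not the authors' implementation (the paper solves a sequence of convex approximations instead).

```python
import numpy as np

def nash_mtl_direction(grads, n_iter=200, step=0.5):
    """Combine per-task gradients via the Nash bargaining characterization.

    Finds positive weights alpha with (G G^T) alpha = 1/alpha (elementwise),
    then returns the weighted update direction sum_i alpha_i g_i.
    Illustrative sketch: uses a damped fixed-point iteration, not the
    convex-approximation solver from the paper.
    """
    G = np.stack(grads)            # shape (num_tasks, num_params)
    K = G @ G.T                    # task Gram matrix of gradient inner products
    alpha = np.ones(len(grads))    # start from uniform weights
    for _ in range(n_iter):
        # damped update toward the condition K alpha = 1 / alpha
        alpha = (1 - step) * alpha + step / (K @ alpha)
        alpha = np.clip(alpha, 1e-8, None)  # keep weights positive
    return alpha, G.T @ alpha      # weights and joint update direction

# Toy example: two orthogonal task gradients of different norms.
# The larger gradient gets a smaller weight, balancing the tasks.
alpha, d = nash_mtl_direction([np.array([2.0, 0.0]), np.array([0.0, 1.0])])
```

On this toy input the Gram matrix is diagonal, so each weight solves k·alpha = 1/alpha independently, i.e. alpha = 1/‖g‖: the weighted direction ends up giving both tasks equal influence, which is the invariance-to-gradient-scale property the bargaining view is meant to provide.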

Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya• 2022

Related benchmarks

Task                        Dataset                Result                          Rank
Semantic segmentation       Cityscapes (test)      mIoU 75.41                      1154
Semantic segmentation       Cityscapes             mIoU 75.41                      658
Depth Estimation            NYU v2 (test)          --                              432
Semantic segmentation       NYU v2 (test)          mIoU 51.73                      282
Surface Normal Estimation   NYU v2 (test)          Mean Angle Distance (MAD) 23.21 224
Depth Estimation            NYU Depth V2           RMSE 0.78                       209
Semantic segmentation       NYU Depth V2 (test)    mIoU 40.13                      183
Semantic segmentation       NYUD v2                mIoU 31.32                      125
Surface Normal Prediction   NYU V2                 Mean Error 25.26                118
Multi-Label Classification  ChestX-Ray14 (test)    --                              88

(Showing 10 of 46 rows)

Other info

Code
