Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning

About

As machine learning becomes more prominent, there is a growing demand to perform several inference tasks in parallel. Running a dedicated model for each task is computationally expensive, so there is great interest in multi-task learning (MTL), which aims to learn a single model that solves several tasks efficiently. MTL models are often optimized by computing a single gradient per task and aggregating the gradients to obtain a combined update direction. However, these approaches overlook an important aspect: the sensitivity of the individual gradient dimensions. Here, we introduce a novel gradient aggregation approach based on Bayesian inference. We place a probability distribution over the task-specific parameters, which in turn induces a distribution over the gradients of the tasks. This additional valuable information allows us to quantify the uncertainty in each gradient dimension, which can then be factored in when aggregating the gradients. We empirically demonstrate the benefits of our approach on a variety of datasets, achieving state-of-the-art performance.
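To make the aggregation idea concrete, here is a minimal sketch of uncertainty-aware gradient aggregation. It is not the paper's exact Bayesian inference procedure; it assumes each task's distribution over task-specific parameters has already been used to draw Monte Carlo gradient samples, and the hypothetical `aggregate_gradients` function simply down-weights each gradient dimension by its per-task variance (a precision-weighted average).

```python
import torch

def aggregate_gradients(grad_samples, eps=1e-8):
    """Precision-weighted gradient aggregation (illustrative sketch).

    grad_samples: list with one tensor per task, each of shape
        (num_samples, num_params) -- gradient samples induced by a
        distribution over that task's task-specific parameters.
    Returns a combined update direction of shape (num_params,).
    """
    means = torch.stack([g.mean(dim=0) for g in grad_samples])     # (T, P)
    variances = torch.stack([g.var(dim=0) for g in grad_samples])  # (T, P)
    # High variance in a dimension means that task's gradient is
    # uncertain there, so it receives a smaller aggregation weight.
    precision = 1.0 / (variances + eps)                            # (T, P)
    weights = precision / precision.sum(dim=0, keepdim=True)
    return (weights * means).sum(dim=0)

# Toy usage: two tasks, 16 gradient samples each, 10 parameters.
task_a = torch.randn(16, 10)
task_b = torch.randn(16, 10) * 3.0  # noisier gradients -> lower weight
update = aggregate_gradients([task_a, task_b])
```

Averaging per dimension, rather than per task, is what distinguishes this from scalar task-weighting schemes: a task can dominate the update in dimensions where its gradient is reliable while being discounted elsewhere.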

Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya • 2024

Related benchmarks

Task | Dataset | Result | Rank
Multi-Label Classification | ChestX-Ray14 (test) | - | 88
Multi-task Learning | UTKFace (test) | Age MAE 0.135 | 14
Multi-task Binary Classification | CIFAR-MTL (test) | Accuracy 59.97 | 13
Multi-task Regression | QM9 (test) | Δm% 53.2 | 13

Other info

Code
