
Maximum Roaming Multi-Task Learning

About

Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights, whether the partitions are disjoint or overlapping. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning, while forcing the parameters to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task datasets. Experimental results suggest that the regularization brought by roaming has more impact on performance than the usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization and consistently achieves improved performance compared to recent multi-task learning formulations.

Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria A. Zuluaga • 2020
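
The abstract describes roaming at a high level. Below is a minimal, illustrative sketch (not the authors' reference implementation) of how a dropout-style roaming partition could be maintained: each task holds a binary mask over a shared layer's units, and at a regulated frequency the mask swaps in the unit that task has visited least, so every unit eventually roams through every task. All names (`RoamingPartition`, `sharing_ratio`, `update_every`) are assumptions made for illustration.

```python
import numpy as np

class RoamingPartition:
    """Sketch of a Maximum Roaming-style partition over a layer's units.

    Each task gets a binary mask over the shared units. At a regulated
    frequency, the least-visited inactive unit is swapped in for each task,
    so every unit visits as many tasks as possible over training.
    Hyperparameter names are illustrative, not taken from the paper.
    """

    def __init__(self, n_units, n_tasks, sharing_ratio=0.5,
                 update_every=100, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_tasks = n_tasks
        self.update_every = update_every
        self.step_count = 0
        # Random initial partition: each task keeps a fixed share of the units.
        k = max(1, int(sharing_ratio * n_units))
        self.masks = np.zeros((n_tasks, n_units), dtype=bool)
        for t in range(n_tasks):
            self.masks[t, self.rng.choice(n_units, size=k, replace=False)] = True
        # Visit counts drive the "maximum roaming" choice of which unit to add.
        self.visits = self.masks.astype(int)

    def step(self):
        """Call once per training step; roams the partition at the set frequency."""
        self.step_count += 1
        if self.step_count % self.update_every != 0:
            return
        for t in range(self.n_tasks):
            inactive = np.flatnonzero(~self.masks[t])
            active = np.flatnonzero(self.masks[t])
            if inactive.size == 0 or active.size == 0:
                continue
            # Activate the least-visited inactive unit for this task...
            new = inactive[np.argmin(self.visits[t, inactive])]
            # ...and retire one active unit so the partition size stays fixed.
            old = self.rng.choice(active)
            self.masks[t, new], self.masks[t, old] = True, False
            self.visits[t, new] += 1

    def mask(self, task):
        """Binary mask applied to the shared layer's output for `task`."""
        return self.masks[task].astype(float)


# Toy usage: 8 shared units, 3 tasks, roam every 10 steps.
part = RoamingPartition(n_units=8, n_tasks=3, update_every=10)
for _ in range(50):
    part.step()
print(part.mask(0))  # e.g. [1. 0. 1. 0. 1. 0. 1. 0.]
```

Keeping the mask size fixed while swapping in the least-visited unit is one simple way to realize "visiting as many tasks as possible at a regulated frequency"; the paper's actual selection rule and update schedule may differ.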

Related benchmarks

Task                                          Dataset        Result              Rank
Depth Estimation                              NYU Depth V2   -                   177
Facial Attribute Classification               CelebA         -                   163
Surface Normal Prediction                     NYU V2         Mean Error: 30.58   100
Depth Estimation                              Cityscapes     Abs. Err.: 0.0143   22
Semantic segmentation                         NYU V2         mIoU: 17.4          14
Semantic segmentation                         Cityscapes     mIoU: 57.93         8
8 grouped facial attributes classification    CelebA         Precision: 0.712    7
