GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks

About

Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\alpha$. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
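
To make the mechanism concrete, here is a minimal PyTorch sketch of the weight update GradNorm performs, reconstructed from the paper's description: per-task gradient norms are measured at the last shared layer, compared against a common target scaled by each task's relative inverse training rate raised to the power $\alpha$, and the loss weights $w_i$ are nudged to close that gap, then renormalized to sum to the number of tasks. The function name `gradnorm_step`, the argument names, and the learning-rate default are assumptions for illustration, not the authors' reference code.

```python
import torch

def gradnorm_step(task_losses, loss_weights, shared_params,
                  initial_losses, alpha=1.5, lr=0.025):
    """One GradNorm loss-weight update, sketched from Chen et al. (2017).

    task_losses:    list of scalar task losses L_i(t), graphs still attached
    loss_weights:   1-D leaf tensor of task weights w_i(t), requires_grad=True
    shared_params:  parameters W of the last shared layer
    initial_losses: list of L_i(0) values recorded at the first training step
    alpha:          the paper's asymmetry hyperparameter
    lr:             step size for the weight update (illustrative value)
    """
    T = len(task_losses)

    # Per-task gradient norms at the shared layer: G_W^(i) = ||grad_W[w_i * L_i]||_2.
    # create_graph=True keeps the norms differentiable w.r.t. the weights w_i.
    grad_norms = []
    for i, L_i in enumerate(task_losses):
        grads = torch.autograd.grad(loss_weights[i] * L_i, shared_params,
                                    retain_graph=True, create_graph=True)
        grad_norms.append(torch.cat([g.flatten() for g in grads]).norm(2))
    grad_norms = torch.stack(grad_norms)

    # Relative inverse training rates r_i(t) = L~_i(t) / mean_j L~_j(t),
    # where L~_i(t) = L_i(t) / L_i(0). Larger r_i means task i is training slower.
    with torch.no_grad():
        loss_ratios = torch.stack([L_i.detach() / L0 for L_i, L0
                                   in zip(task_losses, initial_losses)])
        r = loss_ratios / loss_ratios.mean()
        # Common target, treated as a constant: mean(G_W) * r_i^alpha.
        target = grad_norms.mean() * r.pow(alpha)

    # L_grad = sum_i |G_W^(i) - target_i|, differentiated w.r.t. the weights only.
    grad_loss = (grad_norms - target).abs().sum()
    (w_grad,) = torch.autograd.grad(grad_loss, loss_weights)

    # Gradient step on the weights, then renormalize so that sum_i w_i = T.
    with torch.no_grad():
        loss_weights -= lr * w_grad
        loss_weights.mul_(T / loss_weights.sum())
```

In a full training loop this step would run before the backward pass on the total weighted loss $\sum_i w_i L_i$, since it needs the task-loss graphs intact (hence `retain_graph=True`).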

Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, Andrew Rabinovich • 2017

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic segmentation | Cityscapes | mIoU: 64.81 | 658 |
| Depth Estimation | NYU v2 (test) | -- | 432 |
| Image Classification | CUB | Accuracy: 86 | 282 |
| Semantic segmentation | NYU v2 (test) | mIoU: 52.25 | 282 |
| Surface Normal Estimation | NYU v2 (test) | Mean Angle Distance (MAD): 23.86 | 224 |
| Depth Estimation | NYU Depth V2 | -- | 209 |
| Image Classification | Office-Home (test) | -- | 199 |
| Classification | CelebA | Avg Accuracy: 84.8 | 185 |
| Facial Attribute Classification | CelebA | -- | 163 |
| Semantic segmentation | NYUD v2 | mIoU: 37.19 | 125 |

Showing 10 of 55 rows.
