
Delve into the Applicability of Advanced Optimizers for Multi-Task Learning

About

Multi-Task Learning (MTL) is a foundational machine learning problem that has seen extensive development over the past decade. Recently, various optimization-based MTL approaches have been proposed to learn multiple tasks simultaneously by altering the optimization trajectory. Although these methods strive to de-conflict and re-balance tasks, we empirically identify that their effectiveness is often undermined by an overlooked factor when advanced optimizers are employed: the instantaneous gradients play only a marginal role in the actual parameter updates. This discrepancy prevents MTL frameworks from fully exerting their influence on the learning dynamics. Furthermore, we observe that Muon, a recently emerged advanced optimizer, inherently functions as a multi-task learner, which underscores the critical importance of the gradients used for its orthogonalization. To address these issues, we propose APT (Applicability of advanced oPTimizers), a framework featuring a simple adaptive momentum mechanism designed to balance the strengths of advanced optimizers and MTL. Additionally, we introduce a lightweight direction-preservation method to facilitate Muon's orthogonalization. Extensive experiments across four mainstream MTL datasets demonstrate that APT consistently augments existing MTL approaches, yielding substantial performance improvements.
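To make the momentum-dilution point concrete, here is a minimal NumPy sketch (not from the paper): with a typical EMA momentum coefficient such as Adam's beta1 = 0.9, the freshly computed, de-conflicted gradient carries only a (1 - beta1) share of the update mass, so the step direction is dominated by un-deconflicted history. The `adaptive_beta` rule at the end is purely hypothetical and only illustrates the general idea of an adaptive momentum mechanism; the actual APT rule is specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dim = 0.9, 10_000          # beta: typical first-moment coefficient (Adam's beta1)
m = np.zeros(dim)                # EMA momentum buffer

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

for _ in range(200):
    g = rng.normal(size=dim)     # stand-in for the current de-conflicted MTL gradient
    m = beta * m + (1 - beta) * g

# The current gradient's share of the update mass is only (1 - beta) = 0.1,
# so the update correlates weakly with it (for i.i.d. gradients the cosine
# tends to sqrt(1 - beta**2) ~= 0.44, far from perfect alignment).
print(f"cos(momentum update, current gradient) = {cosine(m, g):.2f}")

def adaptive_beta(m, g, beta_max=0.9, beta_min=0.5):
    """Hypothetical adaptive-momentum rule (illustrative only, not APT itself):
    shrink beta when the de-conflicted gradient disagrees with the buffer,
    so the conflict-aware direction regains weight in the update."""
    agreement = 0.5 * (1 + cosine(m, g))   # map cosine from [-1, 1] to [0, 1]
    return beta_min + (beta_max - beta_min) * agreement
```

The remark about Muon matters because Muon does not apply its momentum buffer directly: the public reference implementation approximately orthogonalizes the 2-D momentum matrix with a quintic Newton-Schulz iteration, so whichever gradients feed that buffer determine the orthogonalized direction. Below is a sketch of that standard iteration (coefficients from the reference implementation); it is not the paper's direction-preservation method.

```python
def newton_schulz_orthogonalize(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximate orthogonalization of a 2-D momentum matrix, following the
    quintic Newton-Schulz iteration used by the reference Muon implementation."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)   # Frobenius normalization keeps singular values <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                       # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X
```

Because this orthogonalization acts on the accumulated momentum matrix rather than on the raw gradient, any per-task de-conflicting must survive the EMA averaging above to shape the final direction, which is the coupling between MTL gradient manipulation and advanced optimizers that APT targets.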

Zhipeng Zhou, Linxiao Cao, Pengcheng Wu, Peilin Zhao, Chunyan Miao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Scene Understanding | Cityscapes | mIoU | 78.23 | 18 |
| Scene Understanding | NYU V2 | mIoU | 42.47 | 18 |
| Multi-task Learning | CelebA | Misclassification Rate (MR) | 6.75 | 17 |
| Multi-task Learning | QM9 | Mean Relative Error (MR) | 4.36 | 17 |
