
Merging Multi-Task Models via Weight-Ensembling Mixture of Experts

About

Merging various task-specific Transformer-based models trained on different tasks yields a single unified model that can execute all the tasks concurrently. Previous methods, exemplified by task arithmetic, have proven to be both effective and scalable. Existing methods have primarily sought a static optimal solution within the original model parameter space. A notable challenge is mitigating the interference between parameters of different models, which can substantially deteriorate performance. In this paper, we propose to merge most of the parameters while upscaling the MLP of the Transformer layers to a weight-ensembling mixture of experts (MoE) module, which can dynamically integrate shared and task-specific knowledge based on the input, thereby providing a more flexible solution that can adapt to the specific needs of each instance. Our key insight is that by identifying and separating shared knowledge from task-specific knowledge, and then dynamically integrating them, we can mitigate the parameter interference problem to a great extent. We conduct conventional multi-task model merging experiments and evaluate the generalization and robustness of our method. The results demonstrate the effectiveness of our method and provide a comprehensive understanding of its behavior. The code is available at https://github.com/tanganke/weight-ensembling_MoE

Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, Dacheng Tao• 2024
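The core idea in the abstract can be sketched numerically: keep a shared (pretrained or merged) MLP weight, treat each fine-tuned model's residual as a task vector, and let a small router compute per-input mixing coefficients over those task vectors. The following is a minimal numpy sketch, not the authors' implementation; the dimensions, the random router, and the helper name `weight_ensembling_moe` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_tasks = 4, 3, 2

# Hypothetical stand-ins: a shared pretrained MLP weight and two fine-tuned variants.
w_pre = rng.standard_normal((d_in, d_out))
w_tasks = [w_pre + 0.1 * rng.standard_normal((d_in, d_out)) for _ in range(n_tasks)]

# Task vectors: the task-specific residual knowledge (fine-tuned minus pretrained).
task_vectors = [w - w_pre for w in w_tasks]

# A tiny router (randomly initialised here; learned in practice) mapping an
# input to one mixing coefficient per task vector.
router_w = rng.standard_normal((d_in, n_tasks))

def weight_ensembling_moe(x):
    """Input-conditioned weight: W(x) = W_shared + sum_i lam_i(x) * tau_i."""
    logits = x @ router_w
    lam = np.exp(logits) / np.exp(logits).sum()  # softmax mixing coefficients
    w_dynamic = w_pre + sum(l * tau for l, tau in zip(lam, task_vectors))
    return x @ w_dynamic

x = rng.standard_normal(d_in)
y = weight_ensembling_moe(x)
print(y.shape)  # one output vector per input
```

Because the coefficients depend on the input, different instances effectively see different merged weights, which is how the module can adapt shared versus task-specific knowledge per example rather than committing to one static merge.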

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | Vision Multi-task Suite (SUN397, Cars, RESISC45, EuroSAT, SVHN, GTSRB, MNIST, DTD) | Average Accuracy 93.6 | 72 |
| Image Classification | SUN397, Cars, EuroSAT, GTSRB, MNIST, DTD Seen Tasks (test) | SUN397 Accuracy 0.8164 | 34 |
| Image Classification | RESISC45, SVHN Unseen Tasks (test) | RESISC45 Accuracy 61.36 | 34 |
| Visual Classification | 8 Vision Tasks (SUN397, Stanford Cars, RESISC45, EuroSAT, SVHN, GTSRB, MNIST, DTD) | SUN397 Accuracy 73.92 | 20 |
| Natural Language Understanding | GLUE | CoLA 72.3 | 14 |
