
Memory-Efficient Transfer Learning with Fading Side Networks via Masked Dual Path Distillation

About

Memory-efficient transfer learning (METL) approaches have recently achieved promising performance in adapting pre-trained models to downstream tasks. They avoid gradient backpropagation through large backbones, thus significantly reducing the number of trainable parameters and the high memory consumption during fine-tuning. However, since they typically employ a lightweight, learnable side network, these methods inevitably introduce additional memory and time overhead during inference, which contradicts the ultimate goal of efficient transfer learning. To address this issue, we propose a novel approach dubbed Masked Dual Path Distillation (MDPD) that accelerates inference while retaining parameter and memory efficiency during fine-tuning via fading side networks. Specifically, MDPD develops a framework that enhances performance by mutually distilling between the frozen backbone and the learnable side network during fine-tuning, and discards the side network during inference without sacrificing accuracy. Moreover, we design a novel feature-based knowledge distillation method for encoder structures with multiple layers. Extensive experiments on distinct backbones across vision-only, language-only, and vision-and-language tasks demonstrate that our method not only accelerates inference by at least 25.2% while keeping parameter and memory consumption comparable, but also remarkably improves accuracy compared to SOTA approaches. The source code is available at https://github.com/Zhang-VKk/MDPD.
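The abstract describes MDPD's mutual distillation only at a high level. As an illustrative sketch, assuming the feature-based distillation term is a weighted layer-wise mean-squared error between the frozen backbone's features and the side network's features (the function name, per-layer weights, and loss form are assumptions for illustration, not MDPD's actual loss):

```python
def layerwise_distill_loss(backbone_feats, side_feats, weights=None):
    """Hypothetical layer-wise feature distillation loss.

    backbone_feats / side_feats: lists of per-layer feature vectors
    (plain Python lists of floats here, one list per encoder layer).
    Returns the weighted sum over layers of the per-layer MSE.
    """
    if weights is None:
        weights = [1.0] * len(backbone_feats)
    total = 0.0
    for w, b, s in zip(weights, backbone_feats, side_feats):
        # mean-squared error between matching-layer features
        mse = sum((bi - si) ** 2 for bi, si in zip(b, s)) / len(b)
        total += w * mse
    return total

# Toy example: two encoder layers, side features offset by 0.1 per element,
# so each layer contributes an MSE of 0.01.
bb = [[1.0, 2.0, 3.0], [0.5, -0.5, 0.0]]
side = [[1.1, 2.1, 3.1], [0.6, -0.4, 0.1]]
print(layerwise_distill_loss(bb, side))  # ≈ 0.02
```

In a mutual-distillation setup this term would be minimized with respect to the side network's parameters only, since the backbone stays frozen; at inference the side path is dropped entirely, which is what yields the reported speedup.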

Yutong Zhang, Jiaxin Chen, Honglin Chen, Kaiqi Zheng, Shengcai Liao, Hanwen Zhong, Weixin Li, Yunhong Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 75.88 | 706 |
| Natural Language Understanding | GLUE | SST-2 | 96.5 | 531 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 76.07 | 486 |
| Image Classification | VTAB 1K | Overall Mean Accuracy | 78.3 | 258 |
| Visual Grounding | RefCOCO+ (val) | Accuracy | 74.05 | 212 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy | 80.46 | 206 |
| Visual Question Answering | GQA (test-dev) | Accuracy | 60.41 | 184 |
| Visual Grounding | RefCOCO+ (testB) | Accuracy | 64.79 | 180 |
| Visual Grounding | RefCOCO (val) | Accuracy | 83.11 | 147 |
| Visual Grounding | RefCOCO (testB) | Accuracy | 78.97 | 138 |

Showing 10 of 20 rows.
