
Rethinking Fine-Tuning: Unlocking Hidden Capabilities in Vision-Language Models

About

Parameter-efficient fine-tuning (PEFT) of Vision-Language Models (VLMs), exemplified by Low-Rank Adaptation (LoRA), has made impressive progress. However, most approaches rely on explicit weight updates and overlook the extensive representational structure already encoded, yet underutilized, in pre-trained models. Recent work has demonstrated that Mask Fine-Tuning (MFT) can be a powerful and efficient post-training paradigm for language models. Instead of updating weights, MFT assigns a learnable gating score to each weight, allowing the model to reorganize its internal subnetworks for downstream task adaptation. In this paper, we rethink fine-tuning for VLMs from a structural reparameterization perspective grounded in MFT. We apply MFT to the language and projector components of VLMs with different language backbones and compare against strong PEFT baselines. Experiments show that MFT consistently surpasses LoRA variants and even full fine-tuning, achieving high performance without altering the frozen backbone. Our findings reveal that effective adaptation can emerge not only from updating weights but also from reestablishing connections among the model's existing knowledge. Code available at: https://github.com/Ming-K9/MFT-VLM
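The gating mechanism the abstract describes can be sketched in a few lines. The following is a hypothetical NumPy toy, not the authors' implementation: a frozen random weight matrix stands in for a pre-trained VLM layer, the target is generated by a sparse subnetwork of that matrix, and only per-weight sigmoid gating scores are trained to rediscover it.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 4))   # frozen "pre-trained" weights (never updated)
S = np.zeros_like(W)              # learnable gating scores, one per weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, S):
    # Each weight is scaled by its gate; W itself stays frozen.
    return x @ (W * sigmoid(S))

# Toy target produced by a sparse subnetwork of W -- the structure
# the gates are supposed to rediscover.
true_mask = rng.random(W.shape) > 0.5
x = rng.standard_normal((32, 4))
y = x @ (W * true_mask)

loss0 = np.mean((forward(x, W, S) - y) ** 2)

lr = 5.0
for _ in range(500):
    g = sigmoid(S)
    err = x @ (W * g) - y  # (32, 4) residual
    # Chain rule: dL/dS = (x^T err) * W * g * (1 - g); no gradient flows to W.
    S -= lr * (x.T @ err) * W * g * (1.0 - g) / len(x)

loss = np.mean((forward(x, W, S) - y) ** 2)
print(f"loss before: {loss0:.4f}, after: {loss:.4f}")
```

In the paper's setting the gates would additionally be binarized (a hard mask) and applied to language and projector layers of a VLM; this sketch keeps soft sigmoid gates and a manual gradient purely for readability.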

Mingyuan Zhang, Yue Bai, Yifan Wang, Yiyang Huang, Yun Fu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 88.5 | 935 |
| Multimodal Evaluation | MME | Score | 1470 | 557 |
| Visual Question Answering | GQA | Accuracy | 66.5 | 374 |
| Multi-discipline Multimodal Understanding | MMMU | Accuracy | 39.1 | 266 |
| Scientific Question Answering | ScienceQA (image) | Accuracy | 80.4 | 53 |
| Science Question Answering | SQA IMG | Score | 78.1 | 23 |
| Text-based Visual Question Answering | TextVQA | Average Score | 64.2 | 21 |
