Rethinking Fine-Tuning: Unlocking Hidden Capabilities in Vision-Language Models
About
Parameter-efficient fine-tuning (PEFT) of Vision-Language Models (VLMs), exemplified by methods such as Low-Rank Adaptation (LoRA), has made impressive progress. However, most approaches rely on explicit weight updates, overlooking the extensive representational structure already encoded in pre-trained models that remains underutilized. Recent work has demonstrated that Mask Fine-Tuning (MFT) can be a powerful and efficient post-training paradigm for language models. Instead of updating weights, MFT assigns a learnable gating score to each weight, allowing the model to reorganize its internal subnetworks for downstream task adaptation. In this paper, we rethink fine-tuning for VLMs from a structural reparameterization perspective grounded in MFT. We apply MFT to the language and projector components of VLMs with different language backbones and compare against strong PEFT baselines. Experiments show that MFT consistently surpasses LoRA variants and even full fine-tuning, achieving high performance without altering the frozen backbone. Our findings reveal that effective adaptation can emerge not only from updating weights but also from reestablishing connections among the model's existing knowledge. Code available at: https://github.com/Ming-K9/MFT-VLM
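To make the idea concrete, here is a minimal NumPy sketch of masking a single frozen linear layer: a real-valued score is attached to each weight, and a hard threshold turns the scores into a binary mask that selects a subnetwork. All names (`gate_scores`, `binarize`, `mft_forward`) and the initialization are illustrative assumptions, not details from the paper; in practice the scores would be trained with a straight-through estimator while the weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))            # frozen pre-trained weight (never updated)
gate_scores = np.full((d_out, d_in), 0.01)    # learnable per-weight gating scores
                                              # (initialized positive: start from the full network)

def binarize(scores, threshold=0.0):
    """Hard 0/1 mask from real-valued scores. During training, gradients
    would flow to the scores via a straight-through estimator (not shown)."""
    return (scores > threshold).astype(W.dtype)

def mft_forward(x):
    # Subnetwork selection: the binary mask gates individual frozen weights,
    # so adaptation changes which connections are active, not their values.
    mask = binarize(gate_scores)
    return x @ (W * mask).T

x = rng.normal(size=(2, d_in))
y = mft_forward(x)
print(y.shape)  # (2, 4)
```

With all scores positive, the mask is all ones and the layer reduces to the original frozen layer; training would push some scores below the threshold, pruning those connections to form a task-specific subnetwork.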
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 88.5 | 935 |
| Multimodal Evaluation | MME | Score | 1470 | 557 |
| Visual Question Answering | GQA | Accuracy | 66.5 | 374 |
| Multi-discipline Multimodal Understanding | MMMU | Accuracy | 39.1 | 266 |
| Scientific Question Answering | ScienceQA image | Accuracy | 80.4 | 53 |
| Science Question Answering | SQA IMG | Score | 78.1 | 23 |
| Text-based Visual Question Answering | TextVQA | Average Score | 64.2 | 21 |