
HyDRA: Hierarchical and Dynamic Rank Adaptation for Mobile Vision Language Model

About

Vision Language Models (VLMs) have undergone significant advancements, particularly with the emergence of mobile-oriented VLMs, which enable a wide range of application scenarios. However, the substantial computational requirements for training these models present a significant obstacle to their practical application. To address this issue, Low-Rank Adaptation (LoRA) has been proposed. Nevertheless, standard LoRA with a fixed rank lacks sufficient capability for training mobile VLMs that process both text and image modalities. In this work, we introduce HyDRA, a parameter-efficient fine-tuning framework designed to implement hierarchical and dynamic rank scheduling for mobile VLMs. The framework incorporates two essential optimization strategies: (1) hierarchical optimization, comprising a coarse-grained approach that assigns different ranks to different layers and a fine-grained method that adjusts ranks within individual layers, and (2) dynamic adjustment, which performs end-to-end automatic optimization using a lightweight performance model to determine and adjust ranks during fine-tuning. Comprehensive experiments on popular benchmarks demonstrate that HyDRA consistently outperforms the baseline, achieving a 4.7% improvement across various model sizes without increasing the number of trainable parameters. On some tasks, it even surpasses full-parameter fine-tuning.
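The coarse-grained idea in the abstract — assigning different LoRA ranks to different layers while keeping the total trainable-parameter budget fixed — can be illustrated with a minimal sketch. The rank values and dimensions below are hypothetical placeholders, not taken from the paper; the sketch only shows how a non-uniform rank schedule can match the budget of a uniform-rank baseline.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA adapter: y = W x + (alpha / r) * B (A x).

    W is the frozen base weight; A and B are the trainable low-rank
    factors, with rank r configurable per layer.
    """
    def __init__(self, d_in, d_out, rank, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, rank))                # trainable up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def num_trainable(self):
        return self.A.size + self.B.size

# Hypothetical coarse-grained rank schedule over 4 layers; the values are
# illustrative only. It spends the same parameter budget as a uniform
# rank-8 baseline (4 + 8 + 12 + 8 = 4 * 8), but concentrates capacity
# in layers that presumably need it more.
rank_schedule = [4, 8, 12, 8]
scheduled = [LoRALinear(64, 64, r) for r in rank_schedule]
uniform = [LoRALinear(64, 64, 8) for _ in range(4)]

budget_scheduled = sum(l.num_trainable() for l in scheduled)
budget_uniform = sum(l.num_trainable() for l in uniform)
assert budget_scheduled == budget_uniform  # same trainable-parameter count
```

Because B is zero-initialized, each adapter starts as an identity perturbation (the forward pass initially equals `W @ x`), so fine-tuning begins from the frozen base model regardless of the chosen rank.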

Yuanhao Xi, Xiaohuan Bing, Ramin Yahyapour • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy: 84.66 | 935 |
| Multimodal Evaluation | MME | -- | 557 |
| Visual Question Answering | GQA | Accuracy: 58.58 | 374 |
| Multimodal Understanding | MMBench | -- | 367 |
| Science Question Answering | ScienceQA (SQA-I) | Accuracy: 57.39 | 81 |
| Text-based Visual Question Answering | TextVQA (VQA^T) | Accuracy: 47.14 | 65 |
