
Null-LoRA: Low-Rank Adaptation on Null Space

About

Parameter-efficient fine-tuning methods, notably LoRA and its variants, have become popular for adapting large-scale models to downstream tasks. Existing methods perform low-rank adaptation over the full parameter space, yet fine-tuning within a subspace can be comparably effective. Motivated by the observation that pre-trained models possess non-trivial null spaces, we propose Null-space based Low-Rank Adaptation (Null-LoRA). Null-LoRA reduces redundancy and raises the effective rank of the update by freezing portions of the low-rank matrices. To further improve parameter efficiency, Null-LoRA constrains the entire incremental update to the null space of the pre-trained weights, so that all of the update capacity goes toward adapting to the new task. In extensive experiments on image-text retrieval and visual question answering, Null-LoRA surpasses the state of the art with fewer parameters.
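The abstract does not spell out how the update is kept in the null space. One plausible construction, sketched below with NumPy, computes an approximate null-space projector of the pre-trained weight via SVD and right-multiplies the low-rank update BA by it, so the adaptation acts only on input directions the pre-trained layer (nearly) ignores. The function names, the SVD-based projector, and the tolerance are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def null_space_projector(W, tol=1e-3):
    """Projector onto the (approximate) null space of a weight matrix W.

    Right-singular vectors whose singular values are near zero span the
    input directions that W (nearly) annihilates.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=True)
    # Pad S so the mask covers all rows of Vt, then keep near-zero directions.
    S_full = np.concatenate([S, np.zeros(Vt.shape[0] - len(S))])
    null_mask = S_full <= tol * S.max()
    V_null = Vt[null_mask].T            # (in_dim, null_dim)
    return V_null @ V_null.T            # (in_dim, in_dim), idempotent

def null_lora_update(W, A, B, tol=1e-3):
    """Constrain the LoRA update BA to the null space of W.

    The projected update then acts on a subspace orthogonal (on the input
    side) to the one the pre-trained weight uses.
    """
    P = null_space_projector(W, tol)
    return (B @ A) @ P                  # same shape as W
```

Because the projector P is idempotent and satisfies W @ P ≈ 0, the base mapping Wx and the adaptation (BA)Px operate on complementary input subspaces, which matches the stated goal of devoting the whole incremental update to new task directions.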

Yi Zhang, Yulei Kang, Haoxuan Chen, Jinxuan Li, Jian-Fang Hu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 77.48 | 664 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 86.3 | 460 |
| Image-to-Text Retrieval | MSCOCO | R@1 | 80.7 | 124 |
| Text-to-Image Retrieval | MSCOCO | R@1 | 62.7 | 118 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 77.48 | 30 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 96.6 | 10 |
