
Less is More: Resource-Efficient Low-Rank Adaptation

About

Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), but it still incurs notable overhead and suffers from parameter interference on complex datasets. While recent works decouple the LoRA update matrices to exploit matrix-wise asymmetry, training costs remain high. We revisit LoRA from the perspective of inter-matrix and intra-layer parameter redundancy and propose Resource-Efficient Low-Rank Adaptation (EffiLoRA), a lightweight and generalizable approach for language, multimodal, and diffusion models. EffiLoRA employs a single A matrix shared across all transformer layers and introduces runtime selective updates of the B matrices to dynamically trade off the system resource budget against model performance. EffiLoRA consistently outperforms LoRA across diverse modalities, including commonsense reasoning, visual instruction tuning, and image generation, demonstrating improved efficiency and robustness.
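The abstract's two ideas can be sketched in a few lines: one A matrix shared by every layer, per-layer B matrices, and a per-step selection of which B matrices to update under a budget. This is a minimal NumPy illustration, not the authors' implementation; the importance score, budget, and all variable names here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_layers = 16, 4, 3  # hidden size, LoRA rank, number of layers

# Inter-layer redundancy: a single A matrix shared by all layers.
A_shared = rng.normal(scale=0.02, size=(r, d))

# Intra-layer part: each layer keeps its own B, zero-initialized as in LoRA.
B = [np.zeros((d, r)) for _ in range(n_layers)]

def lora_delta(layer_idx):
    """Low-rank weight update Delta_W = B_l @ A for one layer."""
    return B[layer_idx] @ A_shared

# Runtime selective B update: only `budget` layers are refreshed this step.
# Ranking layers by gradient norm is a stand-in selection rule, not the
# paper's criterion.
budget = 2
grads = [rng.normal(size=(d, r)) for _ in range(n_layers)]
importance = [np.linalg.norm(g) for g in grads]
selected = np.argsort(importance)[-budget:]

lr = 1e-2
for l in selected:
    B[l] -= lr * grads[l]  # unselected layers keep their B frozen
```

Freezing the unselected B matrices is what lets the resource budget scale down gracefully: the shared A still shapes every layer's update, while per-step compute and optimizer state shrink with the budget.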

Chunlin Tian, Xuyang Wei, Huanrong Liu, Zhijiang Guo, Li Li • 2025

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Generation | HPSv2 (test) | - | 18
Commonsense Reasoning | Commonsense Reasoning Tasks (ARC-e, OBQA, SIQA, ARC-c, WinoG, PIQA, BoolQ, HellaS), LLaMA3-8B | ARC-e Accuracy: 92.9 | 13
Visual Instruction Tuning | LLaVA Evaluation Suite v1.5 | MMBench: 58.1 | 5
Text-to-Image Generation | pokemon-blip-captions v1 (test) | Quality: 8.32 | 3
