
Adaptive LoRA Merge with Parameter Pruning for Low-Resource Generation

About

This study proposes a simple yet effective LoRA merge method for adapting LLMs to low-resource language generation tasks. LoRA merging, which integrates multiple LoRA modules trained on different tasks, has gained attention as an effective and efficient approach for adapting LLMs to target tasks. However, previous methods are limited in adaptability because they keep the LoRA parameters frozen, and the low-resource setting has been outside their scope. We propose a LoRA merge method that updates and prunes LoRA parameters through fine-tuning with minimal target-task data, allowing finer-grained adjustment of LoRA parameters and enhanced task adaptability. We conduct extensive experiments with summarization as the benchmark task, on datasets covering various domains in two languages, English and Japanese. The results confirm that the proposed method achieves significant and consistent improvements in task adaptability over previous methods.
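The core idea, merging several task-specific LoRA modules and then fine-tuning and pruning their parameters on a small amount of target-task data, can be sketched as follows. This is a minimal illustrative sketch, not the authors' released implementation: the class name, the trainable per-module merge coefficients, and the magnitude-based pruning rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AdaptiveLoRAMerge(nn.Module):
    """Sketch: merge multiple task-specific LoRA modules with trainable
    coefficients, keep the LoRA factors themselves trainable, and prune
    low-magnitude entries after fine-tuning. Names are hypothetical."""

    def __init__(self, lora_pairs, prune_ratio=0.5):
        super().__init__()
        # lora_pairs: list of (A, B) low-rank factors,
        # A: (r, d_in), B: (d_out, r)
        self.As = nn.ParameterList([nn.Parameter(A.clone()) for A, _ in lora_pairs])
        self.Bs = nn.ParameterList([nn.Parameter(B.clone()) for _, B in lora_pairs])
        # one trainable merge coefficient per source LoRA module
        self.coef = nn.Parameter(torch.ones(len(lora_pairs)) / len(lora_pairs))
        self.prune_ratio = prune_ratio

    def delta_weight(self):
        # weighted sum of the low-rank updates: sum_i c_i * (B_i @ A_i),
        # added to the frozen base weight at inference time
        return sum(c * B @ A for c, A, B in zip(self.coef, self.As, self.Bs))

    @torch.no_grad()
    def prune(self):
        # after fine-tuning on minimal target-task data, zero out the
        # smallest-magnitude LoRA parameters in each factor
        for p in list(self.As) + list(self.Bs):
            k = int(p.numel() * self.prune_ratio)
            if k > 0:
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).float())

# Usage: two toy LoRA modules; in practice the coefficients and factors
# would be fine-tuned on the target-task loss before pruning.
torch.manual_seed(0)
pairs = [(torch.randn(4, 16), torch.randn(16, 4)) for _ in range(2)]
merger = AdaptiveLoRAMerge(pairs, prune_ratio=0.5)
delta = merger.delta_weight()  # (16, 16) merged low-rank update
merger.prune()
```

Keeping the LoRA factors trainable (rather than frozen, as in prior merge methods) is what permits the finer-grained adjustment the paper describes; pruning then removes parameters that contribute little to the target task.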

Ryota Miyano, Yuki Arase • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | MIMIC-III (test) | RL Score | 29.13 | 10 |
| Summarization | SciTLDR (test) | RL Score | 35.99 | 10 |
| Summarization | Bloomberg (test) | RL Score | 33.12 | 10 |
| Summarization | NLP Paper (test) | BLEU | 23.28 | 10 |
| Summarization | Medical Paper (test) | BLEU | 34.04 | 10 |
| Summarization | MIMIC-III | BERTScore | 0.769 | 10 |
| Summarization | SciTLDR | BERTScore | 0.783 | 10 |
| Summarization | Bloomberg | BERTScore | 0.757 | 10 |
| Summarization | NLP Paper | BERTScore | 83.8 | 10 |
| Summarization | Medical Paper | BERTScore | 0.857 | 10 |

Showing 10 of 14 rows

Other info

Code
