
Advancing Parameter Efficiency in Fine-tuning via Representation Editing

About

Parameter-Efficient Fine-Tuning (PEFT) techniques have drawn significant attention for their ability to yield competitive results while updating only a small fraction of a model's parameters. However, existing PEFT methods pose hyperparameter-selection challenges, such as choosing the rank for LoRA or Adapters, or specifying the length of soft prompts. To address these challenges, we propose a novel fine-tuning approach for neural models, named Representation EDiting (RED), which modifies the representations generated at some layers through scaling and biasing operations. Whereas existing PEFT methods still exhibit over-parameterization that can undermine the generalization ability acquired from pre-training, RED reduces the number of trainable parameters by a factor of 25,700 compared to full-parameter fine-tuning and by a factor of 32 relative to LoRA. Remarkably, RED achieves results comparable or superior to both full-parameter fine-tuning and other PEFT methods. Extensive experiments across model architectures and scales, including RoBERTa, GPT-2, T5, and LLaMA-2, demonstrate the effectiveness and efficiency of RED, positioning it as a promising PEFT strategy for large-scale neural models.
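The core operation the abstract describes is an element-wise scale-and-bias edit applied to a layer's hidden representation. The following is a minimal sketch of that idea, not the authors' implementation; the class name `RedEdit`, the helper `red_param_count`, and the identity initialization are illustrative assumptions.

```python
class RedEdit:
    """Hypothetical RED-style edit for one layer's hidden vector:
    h' = scale * h + bias, applied element-wise."""

    def __init__(self, hidden_size):
        # Assumed initialization: scale=1, bias=0, so the edit starts as an
        # identity and fine-tuning only has to learn deviations from it.
        self.scale = [1.0] * hidden_size
        self.bias = [0.0] * hidden_size

    def __call__(self, hidden):
        # Element-wise scaling and biasing of the hidden representation.
        return [s * h + b for s, h, b in zip(self.scale, hidden, self.bias)]


def red_param_count(hidden_size, num_edited_layers):
    # Each edited layer contributes one scale vector and one bias vector,
    # i.e. 2 * hidden_size trainable parameters -- independent of the
    # weight-matrix shapes that LoRA's rank or Adapter's bottleneck touch.
    return 2 * hidden_size * num_edited_layers
```

For example, editing every layer of a model with hidden size 4096 and 32 layers would train `2 * 4096 * 32 = 262,144` parameters, which illustrates how the method's trainable-parameter count stays small and requires no rank or prompt-length hyperparameter.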

Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang • 2024

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Understanding | GLUE (dev) | SST-2 (Acc): 96.1 | 504
Natural Language Understanding | GLUE | SST-2: 96 | 452
Multi-turn Dialogue Evaluation | MT-Bench | Overall Score: 5.732 | 331
Instruction Following | AlpacaEval 2.0 | -- | 281
Natural Language Understanding | GLUE (val) | SST-2: 93 | 170
Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) (test) | BoolQ Accuracy: 72.1 | 138
Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) | BoolQ Accuracy: 70.8 | 61
Natural Language Understanding | GLUE (test val) | MRPC Accuracy: 90.3 | 59
Natural Language Generation | E2E NLG Challenge | BLEU: 65.77 | 58
Language Modeling and Reasoning | Open LLM Leaderboard | ARC: 72.04 | 33

Showing 10 of 16 rows

Other info

Code
