ReFT: Representation Finetuning for Language Models

About

Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15x–65x more parameter-efficient than LoRA. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, instruction-tuning, and GLUE. In all these evaluations, our ReFTs deliver the best balance of efficiency and performance, and almost always outperform state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft.
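For intuition on what a LoReFT intervention looks like, below is a minimal PyTorch sketch of the low-rank linear subspace edit h' = h + R^T(Wh + b − Rh), where R has orthonormal rows: the frozen model's hidden state h is nudged toward learned target values, but only inside the rank-r subspace spanned by R. The class and parameter names (LoReftSketch, rotate, project) and the dimensions are illustrative assumptions, not the pyreft API; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn


class LoReftSketch(nn.Module):
    """Illustrative low-rank linear subspace intervention on hidden states.

    Applies h' = h + R^T (W h + b - R h), with R (rank x hidden_dim)
    constrained to have orthonormal rows. The base model stays frozen;
    only R, W, and b are trained, i.e. roughly (2 * hidden_dim + 1) * rank
    parameters per intervention.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # R: low-rank rotation, kept (approximately) row-orthonormal.
        self.rotate = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_dim, rank, bias=False)
        )
        # W, b: learned projection producing the target subspace values.
        self.project = nn.Linear(hidden_dim, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_dim) hidden states at the intervened token positions.
        R = self.rotate.weight            # (rank, hidden_dim)
        rotated = h @ R.T                 # R h       -> (..., rank)
        target = self.project(h)          # W h + b   -> (..., rank)
        # Edit h only inside the subspace spanned by the rows of R.
        return h + (target - rotated) @ R


if __name__ == "__main__":
    intervention = LoReftSketch(hidden_dim=768, rank=4)
    h = torch.randn(2, 10, 768)           # (batch, seq_len, hidden_dim)
    print(intervention(h).shape)          # torch.Size([2, 10, 768])
```

Because the edit is confined to a rank-r subspace, the trainable parameter count scales with rank rather than with the full weight matrices, which is where the reported parameter-efficiency gains over LoRA come from.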

Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, Christopher Potts • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 96.31 | 1460
Commonsense Reasoning | WinoGrande | -- | -- | 776
Mathematical Reasoning | GSM8K (test) | Accuracy | 64.7 | 751
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 96.2 | 504
Code Generation | HumanEval (test) | Pass@1 | 68.66 | 444
Reading Comprehension | RACE high | Accuracy | 85.33 | 295
Instruction Following | AlpacaEval 2.0 | -- | -- | 281
Code Generation | MBPP (test) | Pass@1 | 54.4 | 276
Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score | 83.3 | 241
Reading Comprehension | RACE mid | Accuracy | 88.21 | 196
Showing 10 of 28 rows
