
Joint Localization and Activation Editing for Low-Resource Fine-Tuning

About

Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, are commonly used to adapt LLMs. However, standard PEFT methods are less effective in low-resource scenarios with only a few hundred examples. Recent advances in interpretability research have inspired activation editing (or steering) techniques, which modify the activations of specific model components. Because they use extremely few parameters, these methods show promise for small datasets; however, their performance depends heavily on identifying the correct modules to edit, and they often lack stability across datasets. In this paper, we propose Joint Localization and Activation Editing (JoLA), a method that jointly learns (1) which heads in the Transformer to edit, (2) whether the intervention should be additive, multiplicative, or both, and (3) the intervention parameters themselves: the vectors applied as additive offsets or multiplicative scalings to the head output. Through evaluations on three benchmarks spanning commonsense reasoning, natural language understanding, and natural language generation, we demonstrate that JoLA consistently outperforms existing methods. The code for the method is released at https://github.com/wenlai-lavine/jola.
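The intervention described in the abstract can be sketched as follows. This is a simplified NumPy illustration, not the authors' implementation: the function name, gate parameterization, and array shapes are assumptions. It shows the core idea of gated per-head edits, where each head's output is optionally rescaled and/or shifted, and a gate of zero leaves the head untouched (enabling the joint localization).

```python
import numpy as np

def edit_head_outputs(head_out, gate_add, gate_mul, offset, scale):
    """Apply gated additive/multiplicative edits to attention-head outputs.

    head_out : (num_heads, head_dim) array of head outputs
    gate_add, gate_mul : (num_heads,) gates in [0, 1]; 0 = no edit for that head
    offset : (num_heads, head_dim) learned additive vectors
    scale  : (num_heads, head_dim) learned multiplicative scalings (1.0 = identity)
    """
    # Each gate interpolates its head between identity and the learned edit,
    # so learning the gates amounts to learning which heads to edit and how.
    mul = 1.0 + gate_mul[:, None] * (scale - 1.0)
    add = gate_add[:, None] * offset
    return head_out * mul + add
```

In practice such gates would be learned jointly with the offsets and scalings during fine-tuning; heads whose gates collapse to zero are effectively not edited.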

Wen Lai, Alexander Fraser, Ivan Titov • 2025

Related benchmarks

Task                    Dataset                        Result              Rank
Commonsense Reasoning   WinoGrande                     Accuracy 75.7       1085
Question Answering      ARC Challenge                  Accuracy 88.1       906
Mathematical Reasoning  GSM8K                          Accuracy 38.9       499
Logical Reasoning       ListOps                        Accuracy 64.5       32
Language Modeling       NLP Benchmark Suite Aggregate  Average Delta -8.9  16
Question Answering      BoolQ                          Accuracy 90.5       16
