
Fine-Grained Activation Steering: Steering Less, Achieving More

About

Activation steering has emerged as a cost-effective paradigm for modifying large language model (LLM) behaviors. Existing methods typically intervene at the block level, steering the bundled activations of selected attention heads, feedforward networks, or residual streams. However, we reveal that block-level activations are inherently heterogeneous, entangling beneficial, irrelevant, and harmful features, which renders block-level steering coarse, inefficient, and intrusive. To investigate the root cause, we decompose block activations into fine-grained atomic-unit (AU)-level activations, where each AU-level activation corresponds to a single dimension of the block activation and each AU denotes a slice of the block weight matrix; steering an AU-level activation is thus equivalent to steering its associated AU. Our theoretical and empirical analyses show that heterogeneity arises because different AUs, or dimensions, control distinct token distributions in LLM outputs. Block-level steering therefore inevitably moves helpful and harmful token directions together, reducing efficiency, whereas restricting intervention to beneficial AUs yields more precise and effective steering. Building on this insight, we propose AUSteer, a simple and efficient method that operates at the finer granularity of the AU level. AUSteer first identifies discriminative AUs globally by computing activation momenta on contrastive samples, then assigns adaptive steering strengths tailored to diverse inputs and the selected AU activations. Comprehensive experiments on multiple LLMs and tasks show that AUSteer consistently surpasses advanced baselines while steering considerably fewer activations, demonstrating that steering less achieves more.
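The two steps described above can be sketched in a few lines of NumPy. This is an illustrative interpretation only, not the authors' implementation: the per-dimension mean difference over contrastive samples stands in for the paper's "activation momenta", and the magnitude-based scaling is a simple guess at "adaptive steering strengths". The function names (`select_aus`, `au_steer`) and the parameter `alpha` are hypothetical.

```python
import numpy as np

def select_aus(pos_acts, neg_acts, k):
    """Pick the k most discriminative AU dimensions from contrastive samples.

    pos_acts, neg_acts: (n_samples, d) arrays of block activations collected
    on positive/negative contrastive prompts. The per-dimension mean
    difference is used here as a stand-in for the paper's activation momenta.
    """
    delta = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    idx = np.argsort(-np.abs(delta))[:k]  # top-k dimensions by |difference|
    return idx, delta[idx]

def au_steer(h, idx, direction, alpha=1.0):
    """Steer only the selected AU dimensions of a single activation vector h.

    The shift is scaled by the input's own magnitude on the selected
    dimensions (an assumed form of input-adaptive strength, not the
    paper's rule), leaving all other dimensions untouched.
    """
    h = h.copy()
    scale = alpha * np.linalg.norm(h[idx]) / (np.linalg.norm(direction) + 1e-8)
    h[idx] += scale * direction
    return h

# Toy usage: dimension 2 is made discriminative by construction.
rng = np.random.default_rng(0)
pos = rng.normal(0.0, 0.1, (32, 8))
pos[:, 2] += 5.0
neg = rng.normal(0.0, 0.1, (32, 8))

idx, direction = select_aus(pos, neg, k=1)
steered = au_steer(rng.normal(0.0, 1.0, 8), idx, direction, alpha=0.5)
```

In contrast, a block-level method would add a shift along all `d` dimensions at once; restricting the edit to `idx` is what the paper means by steering fewer activations.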

Zijian Feng, Tianjiao Li, Zixiao Zhu, Hanzhang Zhou, Junlang Qian, Li Zhang, Jia Jim Deryl Chua, Lee Onn Mak, Gee Wah Ng, Kezhi Mao • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Question Answering | BoolQ | Accuracy: 90.67 | 240 |
| Common Sense Reasoning | COPA | Accuracy: 99.2 | 138 |
| Commonsense Reasoning | COPA (test) | Accuracy: 91.2 | 46 |
| Commonsense Reasoning | WinoGrande standard (test) | Accuracy: 67.25 | 35 |
| Commonsense Reasoning | WinoG | Accuracy: 79.95 | 19 |
| Reasoning and Math Problem Solving | BoolQ, COPA, WinoG., SVAMP, MAWPS (test) | BoolQ Accuracy: 88.2 | 19 |
| Open generation | Open Generation | Detox Score: 99.25 | 16 |
| Commonsense Reasoning | BoolQ (test) | Accuracy: 88.41 | 9 |
| Open-ended generation | Open-ended generation tasks (Human Evaluation) | Quality Score: 4.3 | 7 |
