
Activation Steering with a Feedback Controller

About

Controlling the behaviors of large language models (LLMs) is fundamental to their safety alignment and reliable deployment. However, existing steering methods are primarily driven by empirical insights and lack theoretical performance guarantees. In this work, we develop a control-theoretic foundation for activation steering by showing that popular steering methods correspond to proportional (P) controllers, with the steering vector serving as the feedback signal. Building on this finding, we propose Proportional-Integral-Derivative (PID) Steering, a principled framework that leverages the full PID controller for activation steering in LLMs. The proportional (P) term aligns activations with target semantic directions, the integral (I) term accumulates errors to enforce persistent corrections across layers, and the derivative (D) term mitigates overshoot by counteracting rapid activation changes. This closed-loop design yields interpretable error dynamics and connects activation steering to classical stability guarantees in control theory. Moreover, PID Steering is lightweight, modular, and readily integrates with state-of-the-art steering methods. Extensive experiments across multiple LLM families and benchmarks demonstrate that PID Steering consistently outperforms existing approaches, achieving more robust and reliable behavioral control.
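The abstract's P/I/D decomposition can be illustrated with a minimal sketch of a per-layer PID update on activation vectors. The function name, gain values, and the use of plain NumPy vectors are illustrative assumptions, not the paper's implementation; setting `ki = kd = 0` recovers the plain steering-vector addition that the paper identifies as a P controller.

```python
import numpy as np

def pid_steering_update(h, target, prev_error, integral,
                        kp=1.0, ki=0.1, kd=0.05):
    """One PID steering step at a single transformer layer (illustrative sketch).

    h          : current activation vector at this layer
    target     : desired semantic direction for the activation
    prev_error : error observed at the previous layer
    integral   : error accumulated across earlier layers
    kp, ki, kd : illustrative gains (not values from the paper)
    """
    error = target - h                    # feedback signal (steering direction)
    integral = integral + error           # I term: persistent correction across layers
    derivative = error - prev_error       # D term: damps rapid activation changes
    h_steered = h + kp * error + ki * integral + kd * derivative
    return h_steered, error, integral
```

In practice such an update would be applied layer by layer (e.g. via forward hooks), carrying `error` and `integral` from one layer to the next so the integral term can enforce corrections that a one-shot steering vector cannot.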

Dung V. Nguyen, Hieu M. Vu, Nhi Y. Pham, Lei Zhang, Tan M. Nguyen• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Understanding | MMLU (5-shot) | – | 132 |
| General Language Understanding | tinyBenchmark | Accuracy (ARC): 72.93 | 81 |
| Activation Steering | ASR (evaluation set) | ASR Accuracy: 94.85 | 20 |
| Toxicity Mitigation | RealToxicityPrompts (1k samples) | CLS Toxicity: 0.51 | 20 |
| Language Modeling | Wikipedia 20k sentences | Perplexity: 9.56 | 20 |
| Language Modeling | Model-generated outputs | PPL (Mistral-7B): 6.08 | 20 |
