
Enhancing Instruction Following of LLMs via Activation Steering with Dynamic Rejection

About

Large Language Models (LLMs), despite advances in instruction tuning, often fail to follow complex user instructions. Activation steering techniques aim to mitigate this by manipulating model internals, but they risk oversteering, where excessive emphasis on the instruction degrades task accuracy and overall text quality. To address this, we introduce DIRECTER (Dynamic rejection steering), a novel steering method that dynamically modulates steering strength by scaling the KV cache, without requiring any additional data. DIRECTER couples steering with a plausibility-guided decoding loop that adaptively adjusts steering strength at each step by comparing the steered output distribution to the original; if the steered output is deemed implausible, steering strength is progressively weakened. This strength modulation is guided by a lightweight, one-time attention-sensitivity analysis that ranks layers by their influence on model representations. Extensive evaluations show that DIRECTER significantly enhances instruction following across diverse benchmarks, improving accuracy by up to 6.5% over baselines without the common trade-offs in generation quality or task fidelity. Dynamic, plausibility-guided control during activation steering thus shows promise as a general mechanism for mitigating oversteering that is compatible with existing baselines.
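The plausibility check at the heart of the decoding loop can be sketched in a few lines. This is an illustrative reconstruction from the abstract only, not the paper's actual algorithm: the function name, the use of KL divergence as the plausibility measure, and the `kl_threshold`/`decay` parameters are all assumptions.

```python
import numpy as np

def plausibility_guided_strength(p_orig, p_steered, strength,
                                 kl_threshold=1.0, decay=0.8):
    """One decoding step of the dynamic-rejection idea: if the steered
    next-token distribution drifts too far from the original model's
    distribution, the steering strength is progressively weakened.
    kl_threshold and decay are hypothetical hyperparameters."""
    eps = 1e-12  # numerical floor to avoid log(0)
    # KL(p_steered || p_orig) as a stand-in plausibility measure
    kl = float(np.sum(p_steered * np.log((p_steered + eps) / (p_orig + eps))))
    if kl > kl_threshold:   # steered output deemed implausible
        strength *= decay   # weaken steering for subsequent steps
    return strength

# toy usage: identical distributions leave the strength unchanged
p = np.array([0.7, 0.2, 0.1])
print(plausibility_guided_strength(p, p, 1.0))  # 1.0
```

In the full method, the returned strength would rescale the KV cache before the next decoding step; here it is just a scalar to show the control loop's accept/weaken logic.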

Minjae Kang, Jaehyung Kim • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Instruction Following | IFEval | - | 625 |
| Long-context Instruction Following | LIFBench | List Score: 64.4 | 9 |
| Mathematical Reasoning | GSM8K-Format | Final Accuracy: 99.1 | 9 |
| Refusal Control | SORRY-Bench | - | 7 |
| Factuality Correction | Adversarial Factuality | Factuality Correction: 98.1 | 4 |
| Instruction Following Evaluation | IFEval (random subset of 50 prompts) | Task Fidelity: 85.9 | 3 |
