
Sycophancy Hides Linearly in the Attention Heads

About

We find that correct-to-incorrect sycophancy signals are most linearly separable within multi-head attention activations. Motivated by the linear representation hypothesis, we train linear probes across the residual stream, multilayer perceptron (MLP), and attention layers to analyze where these signals emerge. Although separability appears in the residual stream and MLPs, steering using these probes is most effective in a sparse subset of middle-layer attention heads. Using TruthfulQA as the base dataset, we find that probes trained on it transfer effectively to other factual QA benchmarks. Furthermore, comparing our discovered direction to previously identified "truthful" directions reveals limited overlap, suggesting that factual accuracy and deference resistance arise from related but distinct mechanisms. Attention-pattern analysis further indicates that the influential heads attend disproportionately to expressions of user doubt, contributing to sycophantic shifts. Overall, these findings suggest that sycophancy can be mitigated through simple, targeted linear interventions that exploit the internal geometry of attention activations.
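The probe-and-steer recipe described in the abstract can be sketched compactly. The snippet below is a minimal illustration only, not the authors' implementation: the tensor shapes, the train_probe and steer names, and the steering coefficient alpha are assumptions made for the example. It fits a logistic-regression probe on cached activations (e.g., from a chosen attention head) and then shifts a head's output along the learned direction at inference time.

import torch
import torch.nn.functional as F

def train_probe(activations: torch.Tensor, labels: torch.Tensor,
                epochs: int = 200, lr: float = 1e-2):
    """Fit a linear (logistic-regression) probe.

    activations: [N, d] cached activations for one site (e.g., one attention head)
    labels:      [N] binary labels (1 = sycophantic shift, 0 = no shift) -- assumed labeling
    Returns the probe weight vector and bias.
    """
    d = activations.shape[1]
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(epochs):
        logits = activations @ w + b
        loss = F.binary_cross_entropy_with_logits(logits, labels.float())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

def steer(head_output: torch.Tensor, probe_w: torch.Tensor, alpha: float = 5.0):
    """Steering intervention: move the head's output along the unit probe direction.

    alpha is a hypothetical steering strength; in practice it would be tuned,
    and the sign chosen to push away from the sycophantic side of the probe.
    """
    direction = probe_w / probe_w.norm()
    return head_output + alpha * direction

In use, one would cache per-head activations on contrastive prompt pairs (with and without expressed user doubt), train a probe per head, and apply steer only at the sparse set of middle-layer heads where the probe direction is most effective.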

Rifo Genadi, Munachiso Nwadike, Nurdaulet Mukhituly, Hilal Alquabeh, Tatsuya Hiraoka, Kentaro Inui • 2026

Related benchmarks

Task                    Dataset         Result                   Rank
Sycophancy Mitigation   TruthfulQA      Sycophancy Rate: 25      14
Question Answering      MMLU            Sycophancy Rate: 44.4    8
Question Answering      ARC Challenge   Sycophancy Rate: 46.7    8
