
Toward universal steering and monitoring of AI models

About

Modern AI models encode much of human knowledge, yet how they internally represent that knowledge remains poorly understood. Characterizing the structure and properties of these representations can improve model capabilities and support the development of effective safeguards. Building on recent advances in feature learning, we develop an effective, scalable approach for extracting linear representations of general concepts in large-scale AI models (language models, vision-language models, and reasoning models). We show how these representations enable model steering, through which we expose vulnerabilities, mitigate misaligned behaviors, and improve model capabilities. We further demonstrate that concept representations transfer remarkably well across human languages and can be combined for multi-concept steering. Through quantitative analysis across hundreds of concepts, we find that newer, larger models are more steerable, and that steering can improve model capabilities beyond standard prompting. Finally, we show that concept representations are effective for monitoring misaligned content (hallucinations, toxic content): predictive models built on concept representations monitor such content more accurately than models that judge outputs directly. Together, our results illustrate the power of internal representations for mapping the knowledge in AI models, advancing AI safety, and improving model capabilities.
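The paper does not spell out its extraction method here, but a common way to obtain a linear concept representation is to contrast a model's hidden-state activations on concept-positive versus concept-negative inputs, then steer by adding the resulting vector to a hidden state. The sketch below illustrates that idea on synthetic activations; the dimension, scale `alpha`, and data are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden dimension (illustrative)

# Synthetic "activations": concept-positive states are shifted along a
# ground-truth direction; concept-negative states are plain noise.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
pos = rng.normal(size=(100, d)) + 2.0 * true_dir
neg = rng.normal(size=(100, d))

# Difference-of-means estimate of the concept vector (one simple
# linear-representation estimator; hypothetical, not the paper's method).
concept = pos.mean(axis=0) - neg.mean(axis=0)
concept /= np.linalg.norm(concept)

def steer(h, v, alpha=4.0):
    """Steer a hidden state h by adding a scaled concept vector v."""
    return h + alpha * v

h = rng.normal(size=d)
h_steered = steer(h, concept)

# The steered state projects more strongly onto the concept direction.
print(concept @ h_steered > concept @ h)  # prints: True
```

In a real setting `pos` and `neg` would be activations read from a chosen layer of the model, and `alpha` would be tuned per concept and layer.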

Daniel Beaglehole, Adityanarayanan Radhakrishnan, Enric Boix-Adserà, Mikhail Belkin • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Classification Probing | Cities (test) | Probe Accuracy (Best Layer) | 100 | 21
Classification Probing | Common (test) | Probe Accuracy (Best Layer) | 76.9 | 21
Classification Probing | HateXplain (test) | Probe Accuracy (Best Layer) | 79.1 | 21
Classification Probing | Sarcasm (test) | Probe Accuracy (Best Layer) | 96.3 | 21
Classification Probing | STSA (test) | Probe Accuracy (Best Layer) | 0.955 | 21
Classification Probing | CounterFact (test) | Probe Accuracy (Best Layer) | 88.4 | 21
Concept vector stability | STSA | Mean Absolute-Cosine Similarity | 0.83 | 9
Concept vector stability | Sarcasm | Mean Absolute-Cosine Similarity | 0.82 | 6
Concept vector stability | HateXplain | Mean Absolute-Cosine Similarity | 0.73 | 3
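The "Probe Accuracy" metric above refers to training a linear classifier (a probe) on hidden-state activations and scoring it on held-out examples, typically at the best-performing layer. The sketch below shows a minimal version of that measurement on synthetic activations; the closed-form least-squares probe and all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 32, 400  # activation dimension and sample count (illustrative)

# Synthetic activations whose labels are a noisy linear function,
# mimicking a concept that is linearly represented in hidden states.
direction = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ direction + 0.1 * rng.normal(size=n) > 0).astype(int)

# Held-out split for measuring probe accuracy.
X_tr, X_te = X[:300], X[300:]
y_tr, y_te = y[:300], y[300:]

# Least-squares linear probe on +/-1 targets (logistic regression is
# another common choice; this keeps the sketch dependency-free).
w, *_ = np.linalg.lstsq(X_tr, 2 * y_tr - 1, rcond=None)
pred = (X_te @ w > 0).astype(int)

accuracy = (pred == y_te).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

Because the synthetic labels are nearly linearly separable, the probe scores high here; on real activations, accuracy varies by concept and layer, which is why the table reports the best layer.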
