
Detecting High-Stakes Interactions with Activation Probes

About

Monitoring is an important aspect of safely deploying large language models (LLMs). This paper examines activation probes for detecting "high-stakes" interactions (those where the text indicates the interaction might lead to significant harm) as a critical yet underexplored target for such monitoring. We evaluate several probe architectures trained on synthetic data and find that they generalize robustly to diverse, out-of-distribution, real-world data. Probe performance is comparable to that of prompted or fine-tuned medium-sized LLM monitors, while offering computational savings of six orders of magnitude; these savings are enabled by reusing the activations of the model being monitored. Our experiments also highlight the potential of resource-aware hierarchical monitoring systems, in which probes serve as an efficient initial filter that flags cases for more expensive downstream analysis. We release our novel synthetic dataset and codebase at https://github.com/arrrlex/models-under-pressure.

Alex McKenzie, Urja Pawar, Phil Blandfort, William Bankes, David Krueger, Ekdeep Singh Lubana, Dmitrii Krasheninnikov • 2025
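The abstract describes training small probes on the monitored model's own hidden activations to flag "high-stakes" inputs, which is what makes them so much cheaper than a separate LLM monitor. As a rough illustration of the idea (not the paper's implementation; the data, dimensions, and training loop here are all hypothetical stand-ins), a minimal linear probe over activation vectors can be sketched as a logistic regression:

```python
# Hypothetical sketch of a linear activation probe in the spirit of the paper:
# a small classifier over hidden activations of the monitored model.
# The "activations" below are synthetic stand-ins, not real model states.
import numpy as np

rng = np.random.default_rng(0)

def train_probe(acts, labels, lr=0.1, steps=500):
    """Train a logistic-regression probe. acts: (n, d), labels in {0, 1}."""
    w = np.zeros(acts.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # sigmoid scores
        grad = p - labels                           # dLoss/dlogits
        w -= lr * (acts.T @ grad) / len(labels)
        b -= lr * grad.mean()
    return w, b

def probe_score(acts, w, b):
    """Probability that each example is high-stakes, per the probe."""
    return 1.0 / (1.0 + np.exp(-(acts @ w + b)))

# Toy stand-in for pooled activations (d = 16): "high-stakes" examples
# are shifted along a fixed direction in activation space.
d = 16
direction = rng.normal(size=d)
safe = rng.normal(size=(200, d))
high_stakes = rng.normal(size=(200, d)) + 2.0 * direction
X = np.vstack([safe, high_stakes])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = train_probe(X, y)
acc = ((probe_score(X, w, b) > 0.5) == y).mean()
print(f"probe train accuracy: {acc:.2f}")
```

In a hierarchical monitoring setup like the one the abstract sketches, examples whose probe score exceeds a threshold would be escalated to a more expensive monitor, while the rest pass through at near-zero marginal cost.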

Related benchmarks

Task                 Dataset               Metric     Result   Rank
Concept Detection    iSarcasm (test)       F1 Score   79       6
Concept Detection    CLEVR (test)          F1 Score   92       6
Concept Detection    COCO (test)           F1 Score   55       6
Concept Detection    OpenSurfaces (test)   F1 Score   39       6
Concept Detection    Pascal (test)         F1 Score   59       6
Concept Detection    Sarcasm (test)        F1 Score   66       6
Concept Detection    GoEmotions (test)     F1 Score   19       6
Concept Detection    COCO                  F1 Score   59.1     5
Concept Detection    Surfaces              F1 Score   0.479    5
Concept Detection    Pascal                F1 Score   60.1     5
(10 of 14 rows shown)
