
Eliciting Language Model Behaviors with Investigator Agents

About

Language models exhibit complex, diverse behaviors when prompted with free-form text, making it difficult to characterize the space of possible outputs. We study the problem of behavior elicitation, where the goal is to search for prompts that induce specific target behaviors (e.g., hallucinations or harmful responses) from a target language model. To navigate the exponentially large space of possible prompts, we train investigator models to map randomly-chosen target behaviors to a diverse distribution of outputs that elicit them, similar to amortized Bayesian inference. We do this through supervised fine-tuning, reinforcement learning via DPO, and a novel Frank-Wolfe training objective to iteratively discover diverse prompting strategies. Our investigator models surface a variety of effective and human-interpretable prompts leading to jailbreaks, hallucinations, and open-ended aberrant behaviors, obtaining a 100% attack success rate on a subset of AdvBench (Harmful Behaviors) and an 85% hallucination rate.
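At its core, behavior elicitation is a search over prompts: given a target behavior, find inputs whose target-model outputs exhibit it. The sketch below illustrates that framing with toy stand-in functions (`target_model`, `exhibits_behavior` are hypothetical placeholders, not from the paper); the paper instead trains investigator language models via supervised fine-tuning, DPO, and a Frank-Wolfe objective rather than filtering candidates.

```python
# Toy sketch of behavior elicitation as prompt search. All components here
# are illustrative stand-ins: the paper trains investigator LMs to generate
# eliciting prompts directly, rather than filtering a fixed candidate pool.

def target_model(prompt: str) -> str:
    # Hypothetical deterministic target model: canned reply per keyword.
    if "weather" in prompt:
        return "It is sunny."
    return "I don't know."

def exhibits_behavior(output: str, behavior: str) -> bool:
    # Simplest possible behavior check: substring match on the output.
    return behavior in output

def elicit(behavior: str, candidates: list[str], n_keep: int = 3) -> list[str]:
    # Keep up to n_keep prompts whose outputs exhibit the target behavior.
    hits = [p for p in candidates if exhibits_behavior(target_model(p), behavior)]
    return hits[:n_keep]

candidates = ["tell me the weather", "say hi", "weather report please"]
print(elicit("sunny", candidates))
```

Running this keeps the two weather prompts, since only their outputs contain "sunny". A trained investigator amortizes this search: instead of scoring candidates one by one, it maps the behavior description straight to a distribution over eliciting prompts.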

Xiang Lisa Li, Neil Chowdhury, Daniel D. Johnson, Tatsunori Hashimoto, Percy Liang, Sarah Schwettmann, Jacob Steinhardt • 2025

Related benchmarks

| Task             | Dataset          | Metric       | Result  | Rank |
|------------------|------------------|--------------|---------|------|
| Jailbreaking     | AdvBench         | ASR          | 18.4    | 44   |
| Self-affirmation | MT-Bench 101     | Success Rate | 0.233   | 25   |
| Inference memory | inference memory | Success Rate | 0.00e+0 | 25   |
