
DynaGuard: A Dynamic Guardian Model With User-Defined Policies

About

Guardian models play a crucial role in ensuring the safety and ethical behavior of user-facing AI applications by enforcing guardrails and detecting harmful content. While standard guardian models are limited to predefined, static harm categories, we introduce DynaGuard, a suite of dynamic guardian models offering novel flexibility by evaluating text against user-defined policies, and DynaBench, a dataset for training and evaluating dynamic guardian models. Our models provide both rapid detection of policy violations and a chain-of-thought reasoning option that articulates and justifies model outputs. Critically, DynaGuard not only surpasses static models in detection accuracy on traditional safety categories, but is also competitive with frontier reasoning models on free-form policy violations, all in a fraction of the time. This makes DynaGuard a critical tool for language model guardrails.
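To illustrate the workflow the abstract describes, here is a minimal sketch of how a dynamic guardian model might be prompted with a user-defined policy and its verdict parsed. The function names, prompt wording, and PASS/FAIL output format are illustrative assumptions, not the actual DynaGuard interface.

```python
# Hypothetical sketch: wrapping a user-defined policy and a dialogue
# into a guardian-model prompt, then parsing a PASS/FAIL verdict.
# All names and formats here are assumptions for illustration.

def build_guardian_prompt(policy: str, dialogue: str) -> str:
    """Combine a user-defined policy and a dialogue into one
    classification prompt for the guardian model."""
    return (
        "You are a guardian model. Decide whether the dialogue below "
        "violates the policy.\n\n"
        f"POLICY:\n{policy}\n\n"
        f"DIALOGUE:\n{dialogue}\n\n"
        "Answer with PASS or FAIL, optionally followed by reasoning."
    )

def parse_verdict(model_output: str) -> bool:
    """Return True if the guardian flagged a violation (FAIL)."""
    return model_output.strip().upper().startswith("FAIL")

# Example: a free-form policy no static harm taxonomy would cover.
policy = "The assistant must never recommend specific stocks."
dialogue = (
    "User: Which stocks should I buy?\n"
    "Assistant: Put everything into TechCorp shares today!"
)
prompt = build_guardian_prompt(policy, dialogue)
# The prompt would be sent to the guardian model; here we only
# demonstrate parsing a hypothetical response.
print(parse_verdict("FAIL: the assistant recommended a stock."))  # True
print(parse_verdict("PASS"))  # False
```

The pass/fail path corresponds to the fast detection mode; the optional reasoning text after the verdict corresponds to the chain-of-thought option mentioned above.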

Monte Hoover, Vatsal Baherwani, Neel Jain, Khalid Saifullah, Joseph Vincent, Chirag Jain, Melissa Kazemi Rad, C. Bayan Bruss, Ashwinee Panda, Tom Goldstein • 2025

Related benchmarks

Task                     Dataset                             F1 Score  Rank
Response Classification  BeaverTails V Text-Image Response   81.73     23
Response Classification  WildGuard Text Response             93.17     16
Response Classification  Aegis Text Response 2.0             80.34     16
Response Classification  XSTest Text Response                95.62     16
