
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models

About

Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often underperform on "hard" examples where the larger model provides accurate predictions. We observe that many inputs can be reliably handled by the smaller model, while only a small fraction require the larger model's capacity. Motivated by this, we propose SafeRoute, a binary router that distinguishes hard examples from easy ones. Our method selectively applies the larger safety guard model to the data that the router considers hard, improving efficiency while maintaining accuracy compared to solely using the larger safety guard model. Experimental results on multiple benchmark datasets demonstrate that our adaptive model selection significantly enhances the trade-off between computational cost and safety performance, outperforming relevant baselines.
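The selection rule described above — run the cheap guard by default, and escalate to the large guard only when the router deems the input hard — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the guard models and the router are stand-in callables (in the paper, the router is a learned binary classifier trained on features from the smaller model, which is omitted here), and all names are hypothetical.

```python
def safe_route(x, small_guard, large_guard, router, threshold=0.5):
    """Adaptive safety-guard selection (illustrative sketch).

    router(x) is assumed to return an estimated probability that the
    small guard would mispredict on x, i.e. that x is a "hard" example.
    Only hard inputs pay the cost of the larger guard model.
    """
    if router(x) >= threshold:
        return large_guard(x)  # expensive but accurate on hard inputs
    return small_guard(x)      # cheap, reliable on easy inputs


if __name__ == "__main__":
    # Stub guards and router, purely for demonstration.
    small = lambda x: "safe"
    large = lambda x: "harmful"
    toy_router = lambda x: 0.9 if "attack" in x else 0.1

    print(safe_route("how do I bake bread?", small, large, toy_router))
    print(safe_route("plan an attack", small, large, toy_router))
```

Because most inputs fall below the threshold, the average cost per query stays close to that of the small guard alone, while accuracy on hard cases approaches that of the large guard.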

Seanie Lee, Dong Bok Lee, Dominik Wagner, Minki Kang, Haebin Seong, Tobias Bocklet, Juho Lee, Sung Ju Hwang • 2025

Related benchmarks

Task                           | Dataset                | Result                  | Rank
-------------------------------|------------------------|-------------------------|-----
Safety Classification          | WildGuardMix (test)    | --                      | 27
Safety Classification          | XSTest (test)          | F1: 89.2                | 20
Unsafe Prompt Detection        | ToxicChat (test)       | Precision: 0.395        | 16
Prompt-only Safety Routing     | ToxicChat              | Routing F1: 56.82       | 10
Prompt-Response Safety Routing | XSTest                 | Routing F1: 56.21       | 10
Prompt-Response Safety Routing | HarmBench              | Routing F1: 55.92       | 10
Safety Classification          | WildGuardMix-p (test)  | F1 Score: 84.8          | 9
Safety Classification          | HarmBench (test)       | F1 Score: 83.4          | 9
Safety Classification          | OAI (test)             | F1 Score: 71.5          | 9
Prompt-only Safety Routing     | WildGuardMix-p         | Routing F1 Score: 61.28 | 5

Showing 10 of 15 rows.
