
Qwen3Guard Technical Report

About

As large language models (LLMs) become more capable and widely used, ensuring the safety of their outputs is increasingly critical. Existing guardrail models, though useful in static evaluation settings, face two major limitations in real-world applications: (1) they typically output only binary "safe/unsafe" labels, which can be interpreted inconsistently across diverse safety policies, rendering them incapable of accommodating varying safety tolerances across domains; and (2) they require complete model outputs before performing safety checks, making them fundamentally incompatible with streaming LLM inference, thereby preventing timely intervention during generation and increasing exposure to harmful partial outputs. To address these challenges, we present Qwen3Guard, a series of multilingual safety guardrail models with two specialized variants: Generative Qwen3Guard, which casts safety classification as an instruction-following task to enable fine-grained tri-class judgments (safe, controversial, unsafe); and Stream Qwen3Guard, which introduces a token-level classification head for real-time safety monitoring during incremental text generation. Both variants are available in three sizes (0.6B, 4B, and 8B parameters) and support up to 119 languages and dialects, providing comprehensive, scalable, and low-latency safety moderation for global LLM deployments. Evaluated across English, Chinese, and multilingual benchmarks, Qwen3Guard achieves state-of-the-art performance in both prompt and response safety classification. All models are released under the Apache 2.0 license for public use.
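For practical use, the generative variant can be queried like any instruction-following model: the conversation to be moderated goes in, a textual safety verdict comes out. Below is a minimal sketch of prompt/response moderation with Hugging Face transformers. The model ID, the chat formatting, and the exact wording of the returned verdict are assumptions made for illustration; the released model cards should be consulted for the authoritative prompt template and label parsing.

```python
# Minimal sketch: moderating a prompt/response pair with a generative guard model
# via Hugging Face transformers. The Hub ID and verdict format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3Guard-Gen-0.6B"  # assumed ID for the smallest generative variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The conversation to be checked is passed through the chat template; the guard
# model then generates a verdict covering the tri-class scheme
# (safe / controversial / unsafe) described in the abstract.
messages = [
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "Here is a high-level overview of how pin-tumbler locks work..."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
verdict = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # expected to contain one of the three labels; exact format is assumed
```

Stream Qwen3Guard differs in that it attaches a token-level classification head, so a safety label can be refreshed as each response token arrives and generation can be interrupted before an unsafe completion is fully emitted.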

Haiquan Zhao, Chenhan Yuan, Fei Huang, Xiaomeng Hu, Yichang Zhang, An Yang, Bowen Yu, Dayiheng Liu, Jingren Zhou, Junyang Lin, Baosong Yang, Chen Cheng, Jialong Tang, Jiandong Jiang, Jianwei Zhang, Jijie Xu, Ming Yan, Minmin Sun, Pei Zhang, Pengjun Xie, Qiaoyu Tang, Qin Zhu, Rong Zhang, Shibin Wu, Shuo Zhang, Tao He, Tianyi Tang, Tingyu Xia, Wei Liao, Weizhou Shen, Wenbiao Yin, Wenmeng Zhou, Wenyuan Yu, Xiaobin Wang, Xiaodong Deng, Xiaodong Xu, Xinyu Zhang, Yang Liu, Yeqiu Li, Yi Zhang, Yong Jiang, Yu Wan, Yuxin Zhou • 2025

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | AIME | AIME Accuracy: 51.3 | 283
Graduate-level Question Answering | GPQA | Accuracy: 63.1 | 114
Question Answering | MMLU-Pro | Accuracy: 87.2 | 56
Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1: 93.67 | 34
Safety Classification | SafeRLHF | F1 Score: 0.7054 | 32
Response Harmfulness Classification | WildGuard (test) | F1 (Total): 78.83 | 30
Prompt Classification | SEA-SafeguardBench | AUPRC (Average): 91 | 29
Response Harmfulness Detection | HarmBench | F1 Score: 86.91 | 23
Step-level tool invocation safety detection | AgentHarm Traj | Accuracy: 80.57 | 20
Step-level tool invocation safety detection | ASB-Traj | Accuracy: 0.7989 | 20
Showing 10 of 49 benchmark rows.
