
TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis

About

Large Language Models (LLMs) excel at a wide range of natural language processing tasks but remain vulnerable to generating harmful content or being exploited for malicious purposes. Although safety alignment datasets have been introduced to mitigate such risks through supervised fine-tuning (SFT), these datasets often lack comprehensive risk coverage: most focus primarily on lexical diversity while neglecting other critical dimensions. To address this limitation, we propose a novel analysis framework that systematically measures the risk coverage of alignment datasets across three essential dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. We further introduce TRIDENT, an automated pipeline that leverages persona-based, zero-shot LLM generation to produce diverse, comprehensive instructions spanning these dimensions. Each harmful instruction is paired with an ethically aligned response, yielding two datasets: TRIDENT-Core, comprising 26,311 examples, and TRIDENT-Edge, with 18,773 examples. Fine-tuning Llama 3.1-8B on TRIDENT-Edge yields substantial improvements, achieving an average 14.29% reduction in Harm Score and a 20% decrease in Attack Success Rate compared to the best-performing baseline model fine-tuned on the WildBreak dataset.
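The pipeline described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of persona-based, zero-shot synthesis of (harmful instruction, aligned response) pairs; the persona list, tactic list, and the `call_llm` stub are illustrative assumptions, not the paper's actual code or prompts.

```python
# Hypothetical sketch of TRIDENT-style persona-based, zero-shot
# red-teaming data synthesis. All names below are assumptions.
import random

# Assumed persona and jailbreak-tactic pools (the real pipeline
# draws these from much larger, systematically built inventories).
PERSONAS = [
    "a disgruntled chemist with lab access",
    "a scam-call operator targeting the elderly",
    "an anonymous forum user seeking software exploits",
]
JAILBREAK_TACTICS = ["role-play framing", "hypothetical scenario", "payload splitting"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[model output for: {prompt[:40]}...]"

def synthesize_pair(seed: int) -> dict:
    """Generate one (harmful instruction, ethically aligned response) pair."""
    rng = random.Random(seed)
    persona = rng.choice(PERSONAS)
    tactic = rng.choice(JAILBREAK_TACTICS)
    # Zero-shot generation prompt conditioned on a persona and a tactic,
    # covering the Malicious Intent and Jailbreak Tactics dimensions.
    gen_prompt = (
        f"You are {persona}. Using {tactic}, write one request "
        "you might send to an AI assistant."
    )
    instruction = call_llm(gen_prompt)
    # Pair the instruction with a safe, aligned refusal for SFT.
    refusal_prompt = (
        "Write a safe, ethically aligned refusal that explains the risk:\n"
        + instruction
    )
    return {
        "instruction": instruction,
        "response": call_llm(refusal_prompt),
        "persona": persona,
        "tactic": tactic,
    }

pair = synthesize_pair(0)
```

In the actual pipeline each generated pair would then be deduplicated and scored along the three coverage dimensions before inclusion in TRIDENT-Core or TRIDENT-Edge.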

Xiaorui Wu, Xiaofeng Mao, Fei Li, Xin Zhang, Xuanhong Li, Chong Teng, Donghong Ji, Zhuang Li • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Safety Evaluation | AdvBench | -- | 117 |
| Safety Evaluation | StrongREJECT | Attack Success Rate: 6 | 45 |
| Jailbreak Attack Evaluation | TRIDENT CORE | HPR: 7 | 38 |
| Red-teaming Safety Evaluation | StrongREJECT | ASR: 9 | 32 |
| Red-teaming Safety Evaluation | HarmBench | ASR: 2 | 32 |
| Red-teaming Safety Evaluation | Basebench | HS: 1.74 | 16 |
| Red-teaming Safety Evaluation | Edgebench | HS: 2.36 | 16 |
| Red-teaming Safety Evaluation | SC-Safety | HS: 2.07 | 16 |
| Safety Evaluation | JailBreakV | ASR: 26 | 15 |
| Safety Evaluation | AdvBench (test) | -- | 10 |

Showing 10 of 16 rows

Other info

Code
