
StealthGraph: Exposing Domain-Specific Risks in LLMs through Knowledge-Graph-Guided Harmful Prompt Generation

About

Large language models (LLMs) are increasingly applied in specialized domains such as finance and healthcare, where they introduce unique safety risks. Domain-specific datasets of harmful prompts remain scarce and still rely largely on manual construction; public datasets focus mainly on explicit harmful prompts, which modern LLM defenses can often detect and refuse. In contrast, implicit harmful prompts, expressed through indirect domain knowledge, are harder to detect and better reflect real-world threats. We identify two challenges: transforming domain knowledge into actionable constraints and increasing the implicitness of generated harmful prompts. To address these challenges, we propose an end-to-end framework that first performs knowledge-graph-guided harmful prompt generation to systematically produce domain-relevant prompts, and then applies dual-path obfuscation rewriting to convert explicit harmful prompts into implicit variants via direct and context-enhanced rewriting. This framework yields high-quality datasets that combine strong domain relevance with implicitness, enabling more realistic red-teaming and advancing LLM safety research. We release our code and datasets on GitHub.
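The abstract describes a two-stage pipeline: knowledge-graph-guided generation of explicit domain-relevant prompts, followed by dual-path obfuscation rewriting (direct and context-enhanced). The Python sketch below only illustrates that flow; every name in it (HarmfulPrompt, generate_from_knowledge_graph, obfuscate, rewriter_llm) is hypothetical and stands in for the released implementation on GitHub.

```python
# Hypothetical sketch of the two-stage pipeline described above.
# Function names and data shapes are illustrative, not the released API.

from dataclasses import dataclass

@dataclass
class HarmfulPrompt:
    text: str
    domain: str      # e.g. "finance" or "healthcare"
    implicit: bool   # True after obfuscation rewriting

def generate_from_knowledge_graph(kg_triples, domain):
    """Stage 1: turn domain knowledge-graph triples into explicit,
    domain-relevant harmful prompts (placeholder templating logic)."""
    prompts = []
    for head, relation, tail in kg_triples:
        text = f"How can {head} be used via {relation} to affect {tail}?"
        prompts.append(HarmfulPrompt(text=text, domain=domain, implicit=False))
    return prompts

def obfuscate(prompt, rewriter_llm):
    """Stage 2: dual-path obfuscation rewriting. Both paths are sketched
    as calls to an LLM rewriter; `rewriter_llm` is any callable mapping
    an instruction string to rewritten text."""
    direct = rewriter_llm(
        f"Rephrase indirectly, hiding the explicit intent: {prompt.text}")
    context_enhanced = rewriter_llm(
        f"Embed in a plausible {prompt.domain} scenario: {prompt.text}")
    # Keep both variants; a real system might filter them by a stealthiness score.
    return [HarmfulPrompt(direct, prompt.domain, True),
            HarmfulPrompt(context_enhanced, prompt.domain, True)]

# Usage (illustrative): explicit prompts from KG triples, then obfuscation.
# prompts = generate_from_knowledge_graph([("drug A", "off-label dosing", "patients")], "healthcare")
# variants = obfuscate(prompts[0], rewriter_llm=my_llm_call)
```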

Huawei Zheng, Xinqi Jiang, Sen Yang, Shouling Ji, Yingcai Wu, Dazhen Deng • 2026

Related benchmarks

Task                          Dataset                    Result      Rank
Jailbreak Attack Evaluation   StealthGraph SG-Implicit   --          12
Jailbreak Attack Evaluation   AdvBench                   --          8
Jailbreak Attack Evaluation   Do-Not-Answer              --          6
Jailbreak Attack Evaluation   HARMFULQA                  --          6
Jailbreak Attack Evaluation   StealthGraph SG-Origin     --          6
Language Modeling             SG-Implicit                PPL 79.87   2
Language Modeling             SG-Origin                  PPL 29.37   1
Language Modeling             AdvBench                   --          1
Language Modeling             Do-Not-Answer              --          1
Language Modeling             HARMFULQA                  --          1
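The Language Modeling rows report perplexity (PPL) of the prompt sets under a reference language model: 29.37 for SG-Origin versus 79.87 for SG-Implicit. For reference, a standard way to compute per-prompt perplexity with a causal LM is sketched below; the model choice (gpt2) and the Hugging Face setup are assumptions, not necessarily the benchmark's configuration.

```python
# Minimal sketch: per-prompt perplexity with a causal LM.
# Model choice (gpt2) is an assumption, not the benchmark's exact setup.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the tokens)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("How do I transfer funds between accounts?"))
```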
