
GRP-Obliteration: Unaligning LLMs With a Single Unlabeled Prompt

About

Safety alignment is only as robust as its weakest failure mode. Despite extensive work on safety post-training, it has been shown that models can be readily unaligned through post-deployment fine-tuning. However, existing unalignment methods often require extensive data curation and degrade model utility. In this work, we extend the practical limits of unalignment by introducing GRP-Obliteration (GRP-Oblit), a method that uses Group Relative Policy Optimization (GRPO) to directly remove safety constraints from target models. We show that a single unlabeled prompt is sufficient to reliably unalign safety-aligned models while largely preserving their utility, and that GRP-Oblit achieves stronger unalignment on average than existing state-of-the-art techniques. Moreover, GRP-Oblit generalizes beyond language models and can also unalign diffusion-based image generation systems. We evaluate GRP-Oblit on six utility benchmarks and five safety benchmarks across fifteen 7-20B parameter models, spanning instruct and reasoning models, as well as dense and MoE architectures. The evaluated model families include GPT-OSS, distilled DeepSeek, Gemma, Llama, Ministral, and Qwen.
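The abstract does not spell out the GRPO machinery, but the core of GRPO is well known: sample a group of completions for a prompt, score each one, and weight each completion's policy-gradient term by its reward normalized against the group's mean and standard deviation (no learned value function). The sketch below shows only that group-relative advantage step; the group size, reward scorer, and prompt here are hypothetical illustrations, not details from the paper.

```python
from statistics import mean, stdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each sampled completion's
    reward against the mean and std of its own group."""
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical example: 4 completions sampled for one unlabeled prompt,
# scored by some reward model (the paper's scorer is not specified here).
adv = grpo_advantages([0.1, 0.9, 0.4, 0.7])
# Each completion's token log-probs would then be weighted by its
# advantage, e.g. loss = -(adv_i * logprob_i), with the usual
# clipped-ratio PPO-style objective in a full implementation.
```

Because advantages are centered within the group, completions scoring above the group mean are reinforced and the rest are suppressed, which is what lets a single prompt (sampled many times) drive the optimization.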

Mark Russinovich, Yanan Cai, Keegan Hines, Giorgio Severi, Blake Bullwinkel, Ahmed Salem • 2026

Related benchmarks

Task                              | Dataset      | Result                       | Rank
Commonsense Reasoning             | HellaSwag    | Accuracy 83.3                | 1460
Multi-task Language Understanding | MMLU         | Accuracy 78.9                | 842
Commonsense Reasoning             | WinoGrande   | Accuracy 78.8                | 776
Jailbreak Attack                  | HarmBench    | Attack Success Rate (ASR) 97 | 376
Instruction Following             | IFEval       | --                           | 292
Math Reasoning                    | GSM8K        | Accuracy 91.2                | 126
Truthfulness Evaluation           | TruthfulQA   | Accuracy 66.7                | 93
Jailbreak Attack                  | StrongREJECT | Attack Success Rate 76       | 88
Jailbreak                         | Sorry        | Jailbreak Rate 98.2          | 70
Jailbreak                         | JBB          | Jailbreak Rate 77            | 70

(Showing 10 of 11 rows)
