
STAR-1: Safer Alignment of Reasoning LLMs with 1K Data

About

This paper introduces STAR-1, a high-quality, 1K-scale safety dataset specifically designed for large reasoning models (LRMs) such as DeepSeek-R1. Built on three core principles -- diversity, deliberative reasoning, and rigorous filtering -- STAR-1 aims to address the critical need for safety alignment in LRMs. Specifically, we begin by integrating existing open-source safety datasets from diverse sources. Then, we curate safety policies to generate policy-grounded deliberative reasoning samples. Lastly, we apply a GPT-4o-based safety scoring system to select training examples aligned with best practices. Experimental results show that fine-tuning LRMs with STAR-1 leads to an average 40% improvement in safety performance across four benchmarks, while incurring only a marginal decrease (an average of 1.1%) in reasoning ability measured across five reasoning tasks. Extensive ablation studies further validate the importance of our design principles in constructing STAR-1 and analyze its efficacy across both LRMs and traditional LLMs. Our project page is https://ucsc-vlaa.github.io/STAR-1.
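The abstract describes a three-stage curation pipeline: pooling prompts from diverse open-source safety datasets, generating policy-grounded deliberative reasoning samples, and filtering with a GPT-4o-based safety score. Below is a minimal Python sketch of that pipeline; the policy text, prompt pool, generation model, prompt wording, and score threshold are all illustrative assumptions rather than the authors' exact setup.

```python
# Minimal sketch of the three-stage STAR-1 curation pipeline described above.
# Dataset contents, policy text, model choices, and the score threshold are
# illustrative assumptions, not the authors' exact artifacts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SAFETY_POLICY = "Placeholder policy: refuse harmful requests and explain why."


def load_prompts() -> list[str]:
    # Stage 1 (stub): pool prompts from diverse open-source safety datasets.
    # The real source datasets are listed in the paper.
    return ["example potentially harmful prompt"]


def generate_sample(prompt: str) -> str:
    # Stage 2: draft a policy-grounded deliberative reasoning response.
    # "gpt-4o" here stands in for the reasoning model used in the paper.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Safety policy:\n{SAFETY_POLICY}\n\n"
                "Reason step by step about how the policy applies, "
                f"then respond to:\n{prompt}"
            ),
        }],
    )
    return resp.choices[0].message.content


def score_safety(prompt: str, response: str) -> float:
    # Stage 3: GPT-4o-based safety scoring; this rubric is illustrative.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Rate the safety of the response on a 1-10 scale. "
                "Reply with the number only.\n"
                f"Prompt: {prompt}\nResponse: {response}"
            ),
        }],
    )
    return float(resp.choices[0].message.content.strip())


# Rigorous filtering: keep only top-scoring examples for the 1K-scale dataset.
star1 = []
for prompt in load_prompts():
    sample = generate_sample(prompt)
    if score_safety(prompt, sample) >= 9.0:  # threshold is an assumption
        star1.append({"prompt": prompt, "response": sample})
```

The key design choice this sketch reflects is that quality control happens twice: once at generation time, by conditioning the reasoning model on an explicit safety policy, and once at selection time, by discarding any sample the scorer does not rate highly.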

Zijun Wang, Haoqin Tu, Yuhan Wang, Juncheng Wu, Yanqing Liu, Jieru Mei, Brian R. Bartoldson, Bhavya Kailkhura, Cihang Xie • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH | Accuracy | 92 | 535 |
| Science Question Answering | ARC Challenge | -- | -- | 342 |
| Mathematical Reasoning | AIME | AIME Accuracy | 83.3 | 288 |
| Science Reasoning | GPQA | Accuracy | 58.6 | 243 |
| Mathematical Reasoning | MATH 500 | pass@1 | 90.58 | 239 |
| Code Generation | HumanEval | Pass@1 | 80 | 171 |
| Reasoning | GPQA Diamond | -- | -- | 135 |
| Jailbreak Defense | Wild Jailbreak | ASR | 8.9 | 114 |
| Code Generation | HumanEval | Accuracy | 50 | 99 |
| Truthfulness | TruthfulQA | Truthfulness Accuracy | 20.5 | 86 |

Showing 10 of 60 rows.
