
Finding and Reactivating Post-Trained LLMs' Hidden Safety Mechanisms

About

Despite the impressive performance of general-purpose large language models (LLMs), they often require fine-tuning or post-training to excel at specific tasks. For instance, large reasoning models (LRMs), such as the DeepSeek-R1 series, demonstrate strong reasoning capabilities after post-training general-purpose LLMs on diverse chain-of-thought (CoT) datasets. However, this additional training frequently comes at the cost of reduced safety: fine-tuned or post-trained models tend to exhibit more harmful behavior than the original models did before post-training, and their enhanced capabilities make such behavior potentially more damaging. Taking LRMs as an example, this paper first investigates the underlying cause of this safety degradation. Our analysis reveals that post-training can mask the base LLM's original safety mechanisms while over-amplifying representations tied to the newly trained capability. Fortunately, we also find that LRMs' safety mechanisms persist rather than being removed during post-training. Based on these findings, we propose a lightweight and cost-effective solution called SafeReAct that restores the suppressed safety behaviors using LoRA adapters on a few layers. Experiments on four state-of-the-art LRMs show that our method significantly improves safety on harmful prompts without compromising reasoning performance. Beyond LRMs, additional results on other domain-specific LLMs, such as medical models, further confirm the generality and effectiveness of our approach.
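The core mechanism the abstract relies on, attaching low-rank (LoRA) adapters to a small subset of layers so a frozen base model's behavior can be corrected cheaply, can be illustrated with a minimal sketch. This is not the paper's SafeReAct implementation; it is a generic NumPy toy showing the LoRA parameterization y = Wx + (alpha/r)·B(Ax), where W is frozen and only the low-rank factors A and B would be trained. All shapes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): d_out x d_in base weight, rank-r adapter.
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))     # frozen base weight of one chosen layer
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection (small init)
B = np.zeros((d_out, r))               # trainable up-projection (zero init)

def forward(x, W, A, B, scale=alpha / r):
    # Base path plus low-rank correction: y = W x + (alpha/r) * B (A x).
    # Only A and B change during adaptation; W stays frozen.
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y0 = forward(x, W, A, B)
# With B zero-initialised, the adapter is an exact no-op before training,
# so the adapted model starts identical to the base model.
assert np.allclose(y0, W @ x)
```

Because the adapter adds only r·(d_in + d_out) parameters per adapted layer, restricting it to "a few layers" (as the abstract describes) keeps the correction far cheaper than full fine-tuning while leaving the base weights, and whatever safety circuitry they encode, untouched.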

Mingjie Li, Wai Man Si, Michael Backes, Yang Zhang, Yisen Wang• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Safety Evaluation | AdvBench | – | 117 |
| Reasoning | GSM8K | – | 106 |
| Reasoning | MATH 500 | Accuracy (%): 84 | 90 |
| Medical Question Answering | MedQA | Accuracy: 74 | 40 |
| Safety Evaluation | Jailbreak Bench | ASR: 3 | 22 |
| Safety Evaluation | JailbreakBench | Harmful Rate: 0.00e+0 | 16 |
| Safety Evaluation | XSTest | Harmful Rate: 2 | 16 |
| Jailbreak Safety Evaluation | JailbreakBench | – | 9 |
