CNT: Safety-oriented Function Reuse across LLMs via Cross-Model Neuron Transfer

About

The widespread deployment of large language models (LLMs) calls for post-hoc methods that can flexibly adapt models to evolving safety requirements. Meanwhile, the rapidly expanding open-source LLM ecosystem has produced a diverse collection of models that already exhibit various safety-related functionalities. This motivates a shift from constructing safety functionality from scratch to reusing existing functionality from external models, thereby avoiding costly data collection and training. In this paper, we present Cross-Model Neuron Transfer (CNT), a post-hoc method that reuses safety-oriented functionality by transferring a minimal subset of neurons from an open-source donor LLM to a target LLM. By operating at the neuron level, CNT enables modular function-level adaptation, supporting both function addition and function deletion. We evaluate CNT on seven popular LLMs across three representative applications: safety disalignment, alignment enhancement, and bias removal. Experimental results show that CNT achieves targeted transfer of safety-oriented functionality with minimal performance degradation (less than 1% for most models) and consistently outperforms five baselines, demonstrating its generality and practical effectiveness.

Yue Zhao, Yujia Gong, Ruigang Liang, Shenchen Zhu, Kai Chen, Xuejing Yuan, Wangjun Zhang• 2026
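
To make the transfer operation concrete, below is a minimal sketch of neuron-level weight copying between two checkpoints. It assumes donor and target share a LLaMA-style architecture; the model names, the neuron indices, and the `transfer_mlp_neurons` helper are illustrative assumptions, since the abstract does not specify CNT's neuron-selection procedure.

```python
# Minimal sketch of cross-model neuron transfer. Assumes donor and target
# share the same LLaMA-style architecture (layers with gate/up/down MLP
# projections). The neuron indices stand in for whatever selection
# procedure CNT uses, which is not detailed in the abstract.
import torch
from transformers import AutoModelForCausalLM

@torch.no_grad()
def transfer_mlp_neurons(donor, target, neuron_ids_per_layer):
    """Copy selected feed-forward neurons from donor into target.

    neuron_ids_per_layer: dict mapping layer index -> list of neuron
    indices in that layer's feed-forward hidden dimension.
    """
    for layer_idx, ids in neuron_ids_per_layer.items():
        d_mlp = donor.model.layers[layer_idx].mlp
        t_mlp = target.model.layers[layer_idx].mlp
        ids = torch.tensor(ids)
        # One feed-forward neuron corresponds to one row of the gate/up
        # projections and the matching column of the down projection.
        for name in ("gate_proj", "up_proj"):
            getattr(t_mlp, name).weight[ids] = getattr(d_mlp, name).weight[ids]
        t_mlp.down_proj.weight[:, ids] = d_mlp.down_proj.weight[:, ids]
    return target

# Hypothetical usage: model names and neuron indices are placeholders.
donor = AutoModelForCausalLM.from_pretrained("donor-model")
target = AutoModelForCausalLM.from_pretrained("target-model")
target = transfer_mlp_neurons(donor, target, {10: [42, 512], 11: [7]})
```

Copying whole feed-forward units rather than arbitrary weight entries keeps the edit modular, which is what would let the same mechanism both add a function (graft donor neurons in) and delete one (overwrite or zero the corresponding target neurons).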

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Function Deletion | HarmfulBench | Delta Refusal Rate (%) | -99 | 23 |
| Language Modeling | MMLU | MMLU Final Performance | 68.25 | 23 |
| Utility Evaluation | MMLU | ΔMMLU | 0.2 | 17 |
| Utility Evaluation | NQ-Open | Delta NQ-Open | 5.13 | 17 |
| Function Addition | MMLU Safety Alignment | Change in RA (%) | 44.67 | 6 |
| Bias Mitigation | bias-bench | SS Score | 64.73 | 5 |
