
Selective Fine-Tuning for Targeted and Robust Concept Unlearning

About

Text-guided diffusion models are used by millions of users but can easily be exploited to produce harmful content. Concept unlearning methods aim to reduce a model's likelihood of generating such content. Traditionally, this has been tackled at the level of individual concepts, with only a handful of recent works considering more realistic concept combinations. However, state-of-the-art methods depend on full fine-tuning, which is computationally expensive. Concept localisation methods can facilitate selective fine-tuning, but existing techniques are static, resulting in suboptimal utility. To tackle these challenges, we propose TRUST (Targeted Robust Selective fine-Tuning), a novel approach that dynamically estimates target concept neurons and unlearns them through selective fine-tuning, empowered by a Hessian-based regularization. We show experimentally, against a number of state-of-the-art baselines, that TRUST is robust against adversarial prompts, preserves generation quality to a significant degree, and is also significantly faster than the state of the art. Our method unlearns not only individual concepts but also combinations of concepts and conditional concepts, without any specific regularization.
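The core idea of selective fine-tuning is that gradient updates are applied only to the neurons identified as encoding the target concept, while all other parameters stay frozen. The sketch below illustrates this masked-update mechanic in plain Python; the neuron mask is hypothetical (the paper's dynamic estimation of concept neurons and the Hessian-based regularization are not shown), so this is a minimal illustration of the update rule, not the authors' implementation.

```python
def selective_update(weights, grads, neuron_mask, lr=0.1):
    """Apply one gradient step only to masked (target-concept) neurons.

    neuron_mask[i] == 1 marks a neuron selected for unlearning;
    all other weights are left untouched (frozen).
    """
    return [w - lr * g * m for w, g, m in zip(weights, grads, neuron_mask)]


# Hypothetical example: four neurons, of which only 0 and 2 are
# flagged as carrying the target concept.
weights = [1.0, 1.0, 1.0, 1.0]
grads = [1.0, 2.0, 3.0, 4.0]
mask = [1, 0, 1, 0]
print(selective_update(weights, grads, mask))  # → [0.9, 1.0, 0.7, 1.0]
```

In practice the same effect is usually achieved in a deep-learning framework by zeroing out gradients (or `requires_grad`) on the frozen parameters before the optimizer step.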

Mansi, Avinash Kori, Francesca Toni, Soteris Demetriou • 2026

Related benchmarks

Task                      | Dataset               | Metric            | Result | Rank
--------------------------|-----------------------|-------------------|--------|-----
Concept Unlearning        | UnlearnDiffAtk        | UnlearnDiffAtk    | 0.0118 | 36
Concept Unlearning        | Ring-a-Bell           | Ring-A-Bell Score | 0.83   | 20
Text-to-Image Generation  | Non-targeted concepts | CLIP Score        | 30.95  | 18
Concept Unlearning        | I2P                   | I2P               | 0.0011 | 17
Concept Unlearning        | MMA-Diffusion         | MMA-Diffusion     | 7.7    | 16
Concept Unlearning        | P4D                   | P4D               | 0.0027 | 14
