
Continual Unlearning for Foundational Text-to-Image Models without Generalization Erosion

About

How can we effectively unlearn selected concepts from pre-trained generative foundation models without resorting to extensive retraining? This research introduces 'continual unlearning', a novel paradigm that enables the incremental, targeted removal of multiple specific concepts from foundational generative models. We propose the Decremental Unlearning without Generalization Erosion (DUGE) algorithm, which selectively unlearns the generation of undesired concepts while preserving the generation of related, non-targeted concepts and alleviating generalization erosion. To this end, DUGE combines three losses: a cross-attention loss that steers the model's focus towards images devoid of the target concept; a prior-preservation loss that safeguards knowledge related to non-target concepts; and a regularization loss that prevents the model from suffering generalization erosion. Experimental results demonstrate that the proposed approach can exclude certain concepts without compromising the overall integrity and performance of the model. This offers a pragmatic solution for refining generative models, adeptly handling the intricacies of model training and concept management while lowering the risks of copyright infringement, misuse of personal or licensed material, and replication of distinctive artistic styles. Importantly, it maintains the non-targeted concepts, thereby safeguarding the model's core capabilities and effectiveness.
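The abstract names three loss terms that DUGE combines. The sketch below is a minimal, hypothetical illustration of how such a three-term objective might be assembled; the mean-squared-error forms, the function names, and the weighting coefficients `lam_prior` and `lam_reg` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def cross_attention_loss(attn_maps, reference_maps):
    """Steer attention for the target-concept prompt towards maps taken
    from images devoid of that concept (MSE form is an assumption)."""
    return float(np.mean((attn_maps - reference_maps) ** 2))

def prior_preservation_loss(preds, frozen_preds):
    """Keep outputs on non-target prompts close to the frozen
    pre-trained model's outputs."""
    return float(np.mean((preds - frozen_preds) ** 2))

def regularization_loss(params, pretrained_params):
    """Penalize parameter drift away from the pre-trained weights to
    limit generalization erosion."""
    return float(np.mean((params - pretrained_params) ** 2))

def duge_objective(attn_maps, reference_maps,
                   preds, frozen_preds,
                   params, pretrained_params,
                   lam_prior=1.0, lam_reg=0.1):
    """Weighted sum of the three terms; the weights are illustrative."""
    return (cross_attention_loss(attn_maps, reference_maps)
            + lam_prior * prior_preservation_loss(preds, frozen_preds)
            + lam_reg * regularization_loss(params, pretrained_params))

# Toy example on random arrays standing in for attention maps,
# model outputs, and parameters.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
p, q = rng.normal(size=16), rng.normal(size=16)
w = rng.normal(size=32)
loss = duge_objective(a, b, p, q, w, w)  # identical params: zero reg term
```

In practice the attention maps and predictions would come from a diffusion model's denoising network; random arrays are used here purely to show the shape of the combined objective.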

Kartik Thakral, Tamar Glaser, Tal Hassner, Mayank Vatsa, Richa Singh • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Continual Concept Learning | 10 Sequential Concepts (test) | UA: 96 | 70 |
