
T2VUnlearning: A Concept Erasing Method for Text-to-Video Diffusion Models

About

Recent advances in text-to-video (T2V) diffusion models have significantly enhanced the quality of generated videos. However, their capability to produce explicit or harmful content introduces new challenges related to misuse and potential rights violations. To address this newly emerging threat, we propose unlearning-based concept erasing as a solution. First, we adopt negatively-guided velocity prediction fine-tuning and enhance it with prompt augmentation to ensure robustness against prompts refined by large language models (LLMs). Second, to achieve precise unlearning, we incorporate mask-based localization regularization and concept preservation regularization to preserve the model's ability to generate non-target concepts. Extensive experiments demonstrate that our method effectively erases a specific concept while preserving the model's generation capability for all other concepts, outperforming existing methods. We provide the unlearned models at https://github.com/VDIGPKU/T2VUnlearning.git.
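The abstract names three ingredients: a negatively-guided velocity prediction objective that pushes the fine-tuned model away from the target concept, a mask-based localization term that keeps the model close to its frozen copy outside the concept region, and a concept preservation term on non-target prompts. Below is a minimal numerical sketch of how such a composite objective could be combined. All names (`unlearning_loss`, `eta`, `lam_loc`, `lam_pres`) and the NumPy arrays standing in for velocity predictions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def unlearning_loss(v_theta, v_frozen_c, v_frozen_uncond, mask,
                    v_theta_other, v_frozen_other,
                    eta=1.0, lam_loc=1.0, lam_pres=1.0):
    """Hypothetical composite unlearning objective (illustrative only).

    v_theta         : fine-tuned model's velocity prediction on the target prompt
    v_frozen_c      : frozen model's prediction conditioned on the target concept
    v_frozen_uncond : frozen model's unconditional prediction
    mask            : 1 inside the concept region, 0 elsewhere
    v_theta_other / v_frozen_other : predictions on a non-target prompt
    """
    # Negatively-guided target velocity: steer *away* from the concept
    # direction (v_frozen_c - v_frozen_uncond) with strength eta.
    v_neg = v_frozen_uncond - eta * (v_frozen_c - v_frozen_uncond)

    # Erasure term: inside the mask, regress toward the negatively-guided velocity.
    l_erase = np.mean(mask * (v_theta - v_neg) ** 2)

    # Localization regularization: outside the mask, stay close to the frozen model.
    l_loc = np.mean((1.0 - mask) * (v_theta - v_frozen_c) ** 2)

    # Concept preservation: on non-target prompts, match the frozen model.
    l_pres = np.mean((v_theta_other - v_frozen_other) ** 2)

    return l_erase + lam_loc * l_loc + lam_pres * l_pres
```

In this sketch the loss vanishes exactly when the fine-tuned model reproduces the negatively-guided velocity inside the mask, the frozen model outside it, and the frozen model on non-target prompts; the real method applies analogous terms to a T2V diffusion backbone during fine-tuning.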

Xiaoyu Ye, Songjie Cheng, Yongtao Wang, Yajiao Xiong, Yishen Li • 2025

Related benchmarks

| Task                     | Dataset     | Result                       | Rank |
|--------------------------|-------------|------------------------------|------|
| Video Nudity Erasure     | Ring-a-Bell | Nudity Rate: 6.97            | 6    |
| Video Nudity Erasure     | Gen         | Nudity Rate: 19.73           | 6    |
| Video Generation Quality | VBench      | Object Class Acc: 87         | 6    |
| Object Erasure           | ImageNet    | ESR (1% Erasure): 92.38      | 5    |
