
MMUnlearner: Reformulating Multimodal Machine Unlearning in the Era of Multimodal Large Language Models

About

Recent progress in Machine Unlearning (MU) has introduced solutions for the selective removal of private or sensitive information encoded within deep neural networks. Nonetheless, MU for Multimodal Large Language Models (MLLMs) remains in its nascent phase. We therefore propose to reformulate the task of multimodal MU in the era of MLLMs: the goal is to erase only the visual patterns associated with a given entity while preserving the corresponding textual knowledge encoded within the original parameters of the language model backbone. To this end, we develop MMUnlearner, a novel geometry-constrained gradient ascent method. During unlearning, it updates the weights of MLLMs with a weight saliency map jointly restricted by the remaining concepts and textual knowledge, thereby preserving parameters essential for non-target knowledge. Extensive experiments demonstrate that MMUnlearner surpasses baselines that directly finetune MLLMs on VQA data via Gradient Ascent (GA) or Negative Preference Optimization (NPO), across all evaluation dimensions. Our code can be found at [this URL](https://github.com/Z1zs/MMUnlearner).
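The core idea of saliency-masked gradient ascent can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' exact MMUnlearner objective: parameters are updated by gradient ascent on a forget loss, but only at coordinates where the forget gradient dominates the retain gradient, so weights important for retained knowledge stay frozen. All function names and the threshold `tau` here are illustrative assumptions.

```python
import numpy as np

def grad_mse(w, x, y):
    # Gradient of 0.5 * (w @ x - y)^2 with respect to w.
    return (w @ x - y) * x

def saliency_mask(g_forget, g_retain, tau=1.0, eps=1e-8):
    # Keep coordinates whose forget-gradient magnitude exceeds tau times
    # the retain-gradient magnitude; freeze the rest (mask = 0).
    return (np.abs(g_forget) > tau * (np.abs(g_retain) + eps)).astype(float)

def unlearn_step(w, forget_batch, retain_batch, lr=0.1, tau=1.0):
    g_f = grad_mse(w, *forget_batch)
    g_r = grad_mse(w, *retain_batch)
    mask = saliency_mask(g_f, g_r, tau)
    # Gradient *ascent* on the forget loss, restricted by the saliency mask.
    return w + lr * mask * g_f

w = np.array([1.0, 1.0, 1.0])
forget = (np.array([1.0, 0.0, 0.0]), 0.0)   # forget signal lives on w[0]
retain = (np.array([0.0, 1.0, 0.0]), 0.0)   # retain signal lives on w[1]
w_new = unlearn_step(w, forget, retain)
# Only w[0] moves; w[1] and w[2] are untouched by the masked update.
```

In the full method, the saliency map is computed over MLLM weights and jointly constrained by remaining visual concepts and textual knowledge; the toy example only shows the masking mechanics.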

Jiahao Huo, Yibo Yan, Xu Zheng, Yuanhuiyi Lyu, Xin Zou, Zhihua Wei, Xuming Hu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench Forget Set | Classification Accuracy | 44.85 | 36 |
| Multimodal Machine Unlearning | Retain Set | Classification Accuracy | 43.18 | 35 |
| Visual Question Answering | CLEAR 1.0 (Retain) | Accuracy | 68.9 | 32 |
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench Real Celebrity | Class Acc | 50.28 | 28 |
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench (test) | Classification Accuracy | 43.95 | 27 |
| Question Answering | CLEAR Real-world 1.0 | Acc | 94.7 | 16 |
| Visual Question Answering | MLLMU-Bench Forget 1.0 | Accuracy | 31.2 | 16 |
| Visual Question Answering | CLEAR Forget 1.0 | Accuracy | 36.2 | 16 |
| Machine Unlearning | CLEAR (test 1) | Forget Accuracy | 29 | 16 |
| Question Answering | MLLMU-Bench Forget 1.0 | Accuracy | 54.4 | 16 |
Showing 10 of 19 rows
