Visual-Guided Key-Token Regularization for Multimodal Large Language Model Unlearning

About

Unlearning in Multimodal Large Language Models (MLLMs) prevents a model from revealing private information when queried about target images. Existing MLLM unlearning methods largely adopt approaches developed for LLMs: they treat all answer tokens uniformly, ignoring their varying importance to the unlearning process, and they focus exclusively on the language modality, disregarding visual cues that indicate key tokens in answers. In this paper, after formulating the problem of unlearning in multimodal question answering for MLLMs, we propose Visual-Guided Key-Token Regularization (ViKeR). We leverage irrelevant visual inputs to predict ideal post-unlearning token-level distributions and use these distributions to regularize the unlearning process, thereby prioritizing key tokens. Further, we define key tokens in unlearning via information entropy and explain ViKeR's effectiveness through token-level gradient reweighting, which amplifies updates on key tokens. Experiments on the MLLMU and CLEAR benchmarks demonstrate that our method effectively performs unlearning while mitigating forgetting and maintaining response coherence.
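The page does not spell out ViKeR's loss, but the abstract's core idea (regularize the forget-set answer distribution toward a reference distribution obtained with an irrelevant image, with per-token weights derived from information entropy) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function name, the use of KL divergence, and the choice to upweight low-entropy (confident) tokens via a softmax over negative entropies are all assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(p):
    """Shannon entropy of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    """KL(p || q) for two probability vectors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def viker_regularizer(answer_logits, ref_logits):
    """Hypothetical ViKeR-style regularizer (sketch, not the paper's code).

    answer_logits: per-token logits [T][V] from the model on the target image.
    ref_logits:    per-token logits [T][V] from the same model given an
                   irrelevant image -- a proxy for the ideal
                   post-unlearning token-level distribution.

    Assumption: low-entropy tokens (where the model is confident, e.g. a
    private name) are treated as key tokens and receive larger weight,
    which in gradient terms amplifies updates on those tokens.
    """
    model_probs = [softmax(l) for l in answer_logits]
    ref_probs = [softmax(l) for l in ref_logits]
    # Per-token divergence from the irrelevant-image reference.
    divergences = [kl(r, m) for r, m in zip(ref_probs, model_probs)]
    # Key-token weights: softmax over negative entropy across the answer.
    weights = softmax([-entropy(p) for p in model_probs])
    return sum(w * d for w, d in zip(weights, divergences))
```

If the model already matches the reference distribution at every token, the regularizer is zero; otherwise it is positive, and confident tokens dominate the sum.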

Chengyi Cai, Zesheng Ye, Peike Li, Bo Han, Jianzhong Qi, Feng Liu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| MLLM Unlearning | MLLMU 10% Task (Real) | ROUGE | 24.6 | 5 |
| MLLM Unlearning | MLLMU 15% Task (Retain) | ROUGE | 52.7 | 5 |
| MLLM Unlearning | MLLMU 15% Task (Real) | ROUGE | 33.2 | 5 |
| MLLM Unlearning | MLLMU 10% Task (Retain) | ROUGE | 32.4 | 5 |
| MLLM Unlearning | MLLMU 15% Task (Forget) | Accuracy | 32 | 5 |
| MLLM Unlearning | MLLMU 15% Task (Generalization) | Accuracy | 32.4 | 5 |
| MLLM Unlearning | MLLMU 10% Task (Forget) | Accuracy | 30.4 | 5 |
| MLLM Unlearning | MLLMU 10% Task (Generalization) | Accuracy | 30.1 | 5 |
| Identity Recognition | CLEAR (Forget) | Recall | 62 | 4 |
| Identity Recognition | CLEAR (Retain) | Recall | 4.21 | 4 |

Showing 10 of 11 rows.
