
Knowledge Vector Weakening: Efficient Training-free Unlearning for Large Vision-Language Models

About

Large Vision-Language Models (LVLMs) are widely adopted for their strong multimodal capabilities, yet they raise serious concerns such as privacy leakage and harmful content generation. Machine unlearning has emerged as a promising solution for removing the influence of specific data from trained models. However, existing approaches largely rely on gradient-based optimization, incurring substantial computational costs for large-scale LVLMs. To address this limitation, we propose Knowledge Vector Weakening (KVW), a training-free unlearning method that directly intervenes in the full model without gradient computation. KVW identifies knowledge vectors that are activated during the model's output generation on the forget set and progressively weakens their contributions, thereby preventing the model from exploiting undesirable knowledge. Experiments on the MLLMU and CLEAR benchmarks demonstrate that KVW achieves a stable forget-retain trade-off while significantly improving computational efficiency over gradient-based and LoRA-based unlearning methods.
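The mechanism described above — locating neurons that activate strongly on the forget set and progressively scaling down their contributions — can be sketched in a minimal toy form. All names here (`identify_knowledge_vectors`, `weaken_knowledge_vectors`, `top_k`, `alpha`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def identify_knowledge_vectors(activations, top_k):
    """Rank hidden units by mean activation magnitude on the forget set."""
    mean_act = np.abs(activations).mean(axis=0)  # shape: (hidden_dim,)
    return np.argsort(mean_act)[-top_k:]

def weaken_knowledge_vectors(W_out, indices, alpha):
    """Scale down the output rows of the selected knowledge vectors."""
    W = W_out.copy()
    W[indices] *= alpha
    return W

# Toy demo: 32 forget-set samples, a 16-unit hidden layer, 8 output dims.
rng = np.random.default_rng(0)
forget_acts = rng.normal(size=(32, 16))   # hidden activations on forget data
forget_acts[:, [3, 7]] *= 10.0            # units 3 and 7 fire strongly
W_out = rng.normal(size=(16, 8))          # down-projection weight matrix

idx = identify_knowledge_vectors(forget_acts, top_k=2)
W_weakened = W_out
for step in range(3):                      # progressive, gradient-free weakening
    W_weakened = weaken_knowledge_vectors(W_weakened, idx, alpha=0.5)

print(sorted(idx.tolist()))                # → [3, 7]
```

Because the intervention only rescales existing weights, no backward pass or optimizer state is needed, which is the source of the claimed efficiency gain over gradient-based unlearning.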

Yejin Kim, Dongjun Hwang, Sungmin Cha, Junsuk Choe • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Machine Unlearning | MLLMU-Bench, LLaVA-1.5-7B (test 1) | Forget Rate: 60.2 | 24 |
| Multimodal Machine Unlearning | MLLMU-Bench, LLaVA-1.5-7B (test 2) | Forget Rate: 57.6 | 24 |
| Machine Unlearning | CLEAR (test 1) | Forget Accuracy: 1 | 16 |
| Machine Unlearning | CLEAR (test 2) | Forget Accuracy: 0.00e+0 | 16 |
