Knowledge Vector Weakening: Efficient Training-free Unlearning for Large Vision-Language Models
About
Large Vision-Language Models (LVLMs) are widely adopted for their strong multimodal capabilities, yet they raise serious concerns such as privacy leakage and harmful content generation. Machine unlearning has emerged as a promising solution for removing the influence of specific data from trained models. However, existing approaches largely rely on gradient-based optimization, incurring substantial computational costs for large-scale LVLMs. To address this limitation, we propose Knowledge Vector Weakening (KVW), a training-free unlearning method that directly intervenes in the full model without gradient computation. KVW identifies knowledge vectors that are activated during the model's output generation on the forget set and progressively weakens their contributions, thereby preventing the model from exploiting undesirable knowledge. Experiments on the MLLMU and CLEAR benchmarks demonstrate that KVW achieves a stable forget-retain trade-off while significantly improving computational efficiency over gradient-based and LoRA-based unlearning methods.
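The core idea of KVW (identify the knowledge vectors that fire on the forget set, then progressively scale down their contribution) can be sketched with a toy MLP block. This is a minimal illustrative sketch, not the paper's implementation: the layer shapes, the top-k selection, and the decay schedule are all assumptions made for the example.

```python
import numpy as np

# Hedged sketch of Knowledge Vector Weakening (KVW): treat the rows of an
# MLP down-projection as "knowledge vectors", find the ones most activated
# on the forget set, and progressively shrink them. All names, shapes, and
# hyperparameters here are illustrative assumptions, not the paper's code.

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32

W_up = rng.normal(size=(d_model, d_ff))    # key matrix: hidden state -> activations
W_down = rng.normal(size=(d_ff, d_model))  # value matrix: rows are knowledge vectors

# Hidden states collected while the model generates on the forget set
forget_states = rng.normal(size=(16, d_model))

# 1) Identify: average ReLU activation of each intermediate neuron on the forget set
acts = np.maximum(forget_states @ W_up, 0).mean(axis=0)  # shape (d_ff,)
top_k = np.argsort(acts)[-4:]                            # most-activated vectors

# 2) Weaken: progressively shrink the selected value rows over several rounds
decay = 0.5
W_down_unlearned = W_down.copy()
for _ in range(3):                       # 0.5 ** 3 = 0.125 of the original scale
    W_down_unlearned[top_k] *= decay

# Selected vectors are weakened; all other weights are untouched
print(np.allclose(W_down_unlearned[top_k], W_down[top_k] * 0.125))  # True
```

Because the intervention is a direct rescaling of existing weights, no gradients or optimizer state are needed, which is the source of the efficiency gain over gradient-based unlearning.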
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Machine Unlearning | MLLMU-Bench LLaVA-1.5-7B (test 1) | Forget Rate | 60.2 | 24 |
| Multimodal Machine Unlearning | MLLMU-Bench LLaVA-1.5-7B (test 2) | Forget Rate | 57.6 | 24 |
| Machine Unlearning | CLEAR (test 1) | Forget Accuracy | 1 | 16 |
| Machine Unlearning | CLEAR (test 2) | Forget Accuracy | 0.00e+0 | 16 |