
SineProject: Machine Unlearning for Stable Vision Language Alignment

About

Multimodal Large Language Models (MLLMs) increasingly need to forget specific knowledge, such as unsafe or private information, without requiring full retraining. However, existing unlearning methods often disrupt vision-language alignment, causing models to reject both harmful and benign queries. We trace this failure to the projector network: during unlearning, its Jacobian becomes severely ill-conditioned, leading to unstable optimization and drift in cross-modal embeddings. We introduce SineProject, a simple method that augments the frozen projector with sinusoidally modulated trainable parameters, improving the Jacobian's spectral conditioning and stabilizing alignment throughout unlearning. Across standard safety and privacy unlearning benchmarks using LLaVA-v1.5 7B and 13B, SineProject reduces benign-query refusals while achieving complete forgetting of targeted information, yielding state-of-the-art forget-retain trade-offs with negligible computational overhead.
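The core idea above — keeping the projector frozen and learning only a bounded, sinusoidally modulated update — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight shapes, the `omega` scale, and the exact placement of `sin()` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen projector weight (vision -> language projection).
W_frozen = rng.standard_normal((8, 8))

# Trainable parameters; only Delta would receive gradients during unlearning.
omega = 1.0
Delta = rng.standard_normal((8, 8)) * 0.1

def effective_weight(W_frozen, Delta, omega):
    # sin() bounds each entry of the update to [-1, 1], so the effective
    # weight stays close to the frozen projector while Delta is trained.
    return W_frozen + np.sin(omega * Delta)

W_eff = effective_weight(W_frozen, Delta, omega)
# The update is elementwise bounded regardless of how large Delta grows.
assert np.all(np.abs(W_eff - W_frozen) <= 1.0)
```

Because the sinusoid saturates, large excursions in `Delta` cannot push the effective weight arbitrarily far from the frozen projector, which is one plausible mechanism for the stabilized alignment the abstract describes.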

Arpit Garg, Hemanth Saratchandran, Simon Lucey • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy | 52.1 | 1525 |
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Science Question Answering | ScienceQA (SQA) | Accuracy | 72.1 | 273 |
| Visual Reasoning | GQA | Accuracy | 61.9 | 93 |
| Multi-modal Understanding | MMBench EN | Accuracy | 26.4 | 64 |
| Visual Question Answering | VQA | Accuracy | 60.4 | 52 |
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench Forget Set | Classification Accuracy | 43.28 | 36 |
| Multimodal Machine Unlearning | Retain Set | Classification Accuracy | 48.13 | 35 |
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench Real Celebrity | Classification Accuracy | 56.41 | 28 |
| Multimodal Machine Unlearning Evaluation | MLLMU-Bench (test) | Classification Accuracy | 42.67 | 27 |

(10 of 12 rows shown)
