
Beyond Superficial Unlearning: Sharpness-Aware Robust Erasure of Hallucinations in Multimodal LLMs

About

Multimodal LLMs are powerful but prone to object hallucination, in which generated descriptions mention entities absent from the image, harming reliability. While recent unlearning methods attempt to mitigate this, we identify a critical flaw: structural fragility. We empirically demonstrate that standard erasure achieves only superficial suppression, trapping the model in sharp minima where hallucinations catastrophically resurge after lightweight relearning. To ensure geometric stability, we propose SARE, which casts unlearning as a targeted min-max optimization problem and uses a Targeted-SAM mechanism to explicitly flatten the loss landscape around hallucinated concepts. By suppressing hallucinations under simulated worst-case parameter perturbations, our framework ensures removal that remains stable under weight shifts. Extensive experiments demonstrate that SARE significantly outperforms baselines in erasure efficacy while preserving general generation quality. Crucially, it maintains persistent hallucination suppression against relearning and parameter updates, validating the effectiveness of geometric stabilization.
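The abstract does not spell out the Targeted-SAM update, but the min-max idea follows the standard Sharpness-Aware Minimization pattern: first ascend to the worst-case point inside a small ball around the current weights, then descend using the gradient taken at that perturbed point. Below is a minimal, generic SAM-style sketch in NumPy on a toy 1-D loss; `rho` (perturbation radius) and `lr` (learning rate) are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One generic SAM step (sketch, not the paper's Targeted-SAM):
    1) move to the worst-case point within an L2 ball of radius rho,
    2) apply the descent update at w using the gradient from that point."""
    g = grad_fn(w)
    # Worst-case (gradient-ascent) perturbation, normalized to length rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = grad_fn(w + eps)  # gradient at the perturbed weights
    return w - lr * g_adv

# Toy example: minimize the sharp loss f(w) = w^4 (gradient 4*w^3).
grad = lambda w: 4.0 * w**3
w = np.array([1.0])
for _ in range(200):
    w = sam_step(w, grad)
```

Because the update uses the gradient from the worst-case neighbor rather than from `w` itself, minima that are sharp (where a tiny perturbation sharply raises the loss) are penalized, which is exactly the geometric-stability property the abstract argues prevents hallucinations from resurging after small weight shifts.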

Xianya Fang, Feiyang Ren, Xiang Chen, Yu Tian, Zhen Bi, Haiyang Yu, Sheng-Jun Huang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text Generation | MSCOCO | BLEU-1 | 57.2 | 26 |
| Object Hallucination Detection | MSCOCO | CHAIR-S | 47.2 | 26 |
| Text Generation | MiniGPT-4 | BLEU-1 | 48.1 | 3 |
| Text Generation | ShareGPT4V | BLEU-1 | 47.9 | 3 |
| Object Hallucination Assessment | MiniGPT-4 | CHAIR-S | 35.4 | 3 |
| Object Hallucination Assessment | ShareGPT4V | CHAIR-S | 34.7 | 3 |
