
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models

About

Multimodal large language models (MLLMs) have attracted increasing attention in the past few years, but they may still generate descriptions that include objects not present in the corresponding images, a phenomenon known as object hallucination. To eliminate hallucinations, existing methods manually annotate paired responses with and without hallucinations, and then employ various alignment algorithms to improve the alignment between images and text. However, these methods not only demand considerable computational resources during the fine-tuning stage but also require expensive human annotation to construct the paired data needed by the alignment algorithms. To address these issues, we borrow the idea of unlearning and propose an efficient fine-grained unlearning framework (EFUF), which can eliminate hallucinations without the need for paired data. Extensive experiments show that our method consistently reduces hallucinations while preserving generation quality with modest computational overhead. Our code and datasets will be publicly available.
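To make the unlearning idea concrete, here is a minimal sketch of how a fine-grained unlearning objective might look. This is an illustration of the general mechanism (gradient ascent on hallucinated tokens, ordinary likelihood training on faithful ones), not the authors' exact loss; the function names, the `unlearn_weight` hyperparameter, and the per-token hallucination mask are assumptions for illustration.

```python
import math

def token_nll(prob):
    """Negative log-likelihood of a single token probability."""
    return -math.log(prob)

def unlearning_style_loss(token_probs, hallucinated_mask, unlearn_weight=0.3):
    """Illustrative fine-grained unlearning objective (hypothetical, not
    EFUF's exact formulation): tokens flagged as hallucinated contribute a
    negated, down-weighted NLL term (gradient ascent pushes their
    probability down), while the remaining tokens keep the usual
    maximum-likelihood term. No paired with/without-hallucination
    responses are needed -- only a per-token hallucination mask."""
    total = 0.0
    for p, is_hallucinated in zip(token_probs, hallucinated_mask):
        if is_hallucinated:
            total -= unlearn_weight * token_nll(p)  # unlearn: ascend on this token
        else:
            total += token_nll(p)                   # keep: ordinary fine-tuning term
    return total / len(token_probs)
```

Because the mask operates per token rather than per response, only the hallucinated spans are unlearned while the rest of the caption continues to be reinforced, which is what lets generation quality survive the procedure.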

Shangyu Xing, Fei Zhao, Zhen Wu, Tuo An, Weihao Chen, Chunhui Li, Jianbing Zhang, Xinyu Dai • 2024

Related benchmarks

| Task                            | Dataset    | Metric  | Result | Rank |
|---------------------------------|------------|---------|--------|------|
| Object Hallucination Detection  | MSCOCO     | CHAIR-S | 59.1   | 26   |
| Text Generation                 | MSCOCO     | BLEU-1  | 52.3   | 26   |
| Object Hallucination Assessment | MiniGPT-4  | CHAIR-S | 38.9   | 3    |
| Object Hallucination Assessment | ShareGPT4V | CHAIR-S | 36.9   | 3    |
| Text Generation                 | MiniGPT-4  | BLEU-1  | 45.6   | 3    |
| Text Generation                 | ShareGPT4V | BLEU-1  | 46.9   | 3    |
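For readers unfamiliar with the CHAIR-S numbers above, here is a short sketch of how the sentence-level CHAIR metric is conventionally computed: the percentage of generated captions that mention at least one object absent from the image. The object vocabulary and ground-truth object sets below are hypothetical toy inputs, and the simple word-splitting stands in for the synonym-aware matching a real evaluation would use.

```python
def chair_s(captions, gt_objects, object_vocab):
    """Sentence-level CHAIR (CHAIR-S), sketched: the share of captions
    containing at least one hallucinated object, as a percentage.

    captions     -- list of generated caption strings
    gt_objects   -- list of sets of objects truly present in each image
    object_vocab -- set of object words the metric checks for
    """
    hallucinated = 0
    for caption, present in zip(captions, gt_objects):
        mentioned = set(caption.lower().split()) & object_vocab
        if mentioned - present:  # caption names an object not in the image
            hallucinated += 1
    return 100.0 * hallucinated / len(captions)
```

Lower is better: a CHAIR-S of 36.9 means roughly a third of generated captions still contain at least one hallucinated object.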
