mDPO: Conditional Preference Optimization for Multimodal Large Language Models

About

Direct preference optimization (DPO) has been shown to be an effective method for large language model (LLM) alignment. Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement. Through a comparative experiment, we identify the unconditional preference problem in multimodal preference optimization, where the model overlooks the image condition. To address this problem, we propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference. Moreover, we introduce a reward anchor that forces the reward to be positive for chosen responses, thereby avoiding the decrease in their likelihood -- an intrinsic problem of relative preference optimization. Experiments on two multimodal LLMs of different sizes and three widely used benchmarks demonstrate that mDPO effectively addresses the unconditional preference problem in multimodal preference optimization and significantly improves model performance, particularly in reducing hallucination.
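The abstract describes three interacting loss terms: the standard DPO preference over responses, an added image-conditioned preference, and a reward anchor on chosen responses. The sketch below shows one plausible way to combine them in PyTorch, assuming implicit DPO rewards computed as beta-scaled log-ratios against a frozen reference model; the function name `mdpo_loss`, the corrupted-image inputs, the equal weighting of the three terms, and the zero anchor point are illustrative assumptions drawn from the abstract, not the authors' reference implementation.

```python
# Minimal sketch of an mDPO-style objective, based on the abstract's description.
# All names and the corrupted-image conditioning are assumptions for illustration.
import torch
import torch.nn.functional as F

def mdpo_loss(
    pi_chosen: torch.Tensor,          # log pi_theta(y_w | image, prompt)
    pi_rejected: torch.Tensor,        # log pi_theta(y_l | image, prompt)
    pi_chosen_corrupt: torch.Tensor,  # log pi_theta(y_w | corrupted image, prompt)
    ref_chosen: torch.Tensor,         # same quantities under the frozen reference model
    ref_rejected: torch.Tensor,
    ref_chosen_corrupt: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    # Implicit DPO rewards: beta * log-ratio between policy and reference.
    r_chosen = beta * (pi_chosen - ref_chosen)
    r_rejected = beta * (pi_rejected - ref_rejected)
    r_chosen_corrupt = beta * (pi_chosen_corrupt - ref_chosen_corrupt)

    # 1) Standard DPO term: prefer the chosen over the rejected response.
    loss_dpo = -F.logsigmoid(r_chosen - r_rejected)

    # 2) Image-preference term: the chosen response should score higher under
    #    the true image than under a corrupted one, so the model cannot
    #    overlook the image condition.
    loss_image = -F.logsigmoid(r_chosen - r_chosen_corrupt)

    # 3) Reward anchor: push the chosen reward above zero, preventing the
    #    chosen response's likelihood from decreasing during training.
    loss_anchor = -F.logsigmoid(r_chosen)

    return (loss_dpo + loss_image + loss_anchor).mean()
```

Given batch-shaped log-probability tensors (in practice, sequence-level sums from the policy and reference models), the function returns a scalar loss; the anchor term is what keeps the chosen response's implicit reward positive so its likelihood does not drift downward, the intrinsic problem of relative preference optimization noted above.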

Fei Wang, Wenxuan Zhou, James Y. Huang, Nan Xu, Sheng Zhang, Hoifung Poon, Muhao Chen • 2024

Related benchmarks

| Task                       | Dataset               | Result                        | Rank |
|----------------------------|-----------------------|-------------------------------|------|
| Visual Question Answering  | TextVQA               | Accuracy: 55.7                | 1117 |
| Hallucination Evaluation   | MMHal-Bench           | MMHal Score: 2.8              | 174  |
| Hallucination Evaluation   | HallusionBench        | --                            | 93   |
| Hallucination Evaluation   | AMBER                 | --                            | 71   |
| Science Question Answering | ScienceQA             | IMG Score: 69.4               | 49   |
| Hallucination Evaluation   | AMBER                 | CHAIR_s: 5                    | 47   |
| Generative Hallucination   | Object-HalBench       | CHAIR_S Score: 16.6           | 33   |
| Hallucination Evaluation   | Object-HalBench       | CHAIR Score (s): 33.3         | 28   |
| Hallucination Evaluation   | MOH                   | HR^D: 50.7                    | 21   |
| Hallucination Evaluation   | HallusionBench (test) | Question Pair Accuracy: 16.48 | 4    |