
OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference

About

Recent advancements in open-source multi-modal large language models (MLLMs) have primarily focused on enhancing foundational capabilities, leaving a significant gap in human preference alignment. This paper introduces OmniAlign-V, a comprehensive dataset of 200K high-quality training samples featuring diverse images, complex questions, and varied response formats to improve MLLMs' alignment with human preferences. We also present MM-AlignBench, a human-annotated benchmark specifically designed to evaluate MLLMs' alignment with human values. Experimental results show that finetuning MLLMs on OmniAlign-V, using either Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO), significantly improves human preference alignment while maintaining or enhancing performance on standard VQA benchmarks, thus preserving the models' fundamental capabilities. Our datasets, benchmark, code and checkpoints have been released at https://github.com/PhoenixZ810/OmniAlign-V.
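As context for the DPO stage mentioned above, the sketch below shows the standard DPO per-example loss (Rafailov et al.), which OmniAlign-V's preference pairs would be fed into. This is a generic illustration, not the paper's implementation; the function name `dpo_loss` and the scalar log-probability inputs (summed token log-probs of the chosen and rejected responses under the policy and a frozen reference model) are assumptions for readability.

```python
import math

def dpo_loss(pi_logp_chosen: float, pi_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares policy-vs-reference log-ratios of the
    chosen and rejected responses."""
    margin = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    # Numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# When the policy equals the reference, the margin is zero and the
# loss is log(2); favoring the chosen response drives the loss down.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))   # log(2) ≈ 0.6931
print(dpo_loss(-1.0, -5.0, -2.0, -2.0))
```

In practice the log-probabilities come from a forward pass over each (image, question, response) triple, and `beta` controls how far the policy may drift from the reference model.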

Xiangyu Zhao, Shengyuan Ding, Zicheng Zhang, Haian Huang, Maosong Cao, Weiyun Wang, Jiaqi Wang, Xinyu Fang, Wenhai Wang, Guangtao Zhai, Haodong Duan, Hua Yang, Kai Chen • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| OCR Evaluation | OCRBench | Score 58.9 | 296 |
| Multimodal Understanding | MMMU | Accuracy 60.7 | 275 |
| Diagram Question Answering | AI2D | AI2D Accuracy 81.7 | 196 |
| Multimodal Question Answering | MM-Vet | Total Score 56.9 | 24 |
| Multi-modal Understanding | MMBench v1.1 | Accuracy 80.6 | 22 |
| Human Preference Alignment | MM-AlignBench 1.0 (test) | Win Rate 72.6 | 18 |
| Multi-modal Human-Preference Alignment | MIA-Bench | Score 89.6 | 6 |
| Multi-modal Preference Alignment | MM-AlignBench | Winning Rate 62.3 | 6 |
| Multi-modal Preference Alignment | WildVision | Winning Rate 40.2 | 6 |
