
RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness

About

Traditional feedback learning for hallucination reduction relies on labor-intensive manual labeling or expensive proprietary models. This leaves the community without foundational knowledge about how to build high-quality feedback with open-source MLLMs. In this work, we introduce RLAIF-V, a novel framework that aligns MLLMs in a fully open-source paradigm. RLAIF-V maximally exploits open-source MLLMs from two perspectives: high-quality feedback data generation for preference learning, and self-feedback guidance for inference-time scaling. Extensive experiments on six benchmarks, in both automatic and human evaluation, show that RLAIF-V substantially enhances model trustworthiness at both preference-learning and inference time. RLAIF-V 7B reduces object hallucination by 80.7% and overall hallucination by 33.7%. Remarkably, RLAIF-V 12B further reveals the self-alignment potential of open-source MLLMs: the model can learn from its own feedback to achieve super GPT-4V trustworthiness.
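The second mechanism above, self-feedback guidance at inference time, can be illustrated with a best-of-N sketch: the model samples several candidate responses and keeps the one its own feedback scores as most trustworthy. This is a minimal illustration only; the `generate` and `self_score` callables are hypothetical stand-ins for an MLLM, not the RLAIF-V implementation.

```python
from typing import Callable, List


def best_of_n(
    generate: Callable[[str], str],          # draws one candidate response
    self_score: Callable[[str, str], float],  # model's own trustworthiness score
    prompt: str,
    n: int = 4,
) -> str:
    """Sample n candidates and return the one the model itself scores highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda resp: self_score(prompt, resp))


# Toy usage with stubs standing in for a real model: the scorer here simply
# prefers shorter captions (fewer chances to mention hallucinated objects).
if __name__ == "__main__":
    pool = iter(["a cat on a mat", "a cat", "two dogs and a frisbee"])
    best = best_of_n(lambda p: next(pool), lambda p, r: -len(r), "Describe the image.", n=3)
    print(best)  # prints "a cat"
```

In practice the scorer would be the aligned model itself judging each candidate, which is what lets trustworthiness improve at inference time without any external feedback source.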

Tianyu Yu, Haoye Zhang, Qiming Li, Qixin Xu, Yuan Yao, Da Chen, Xiaoman Lu, Ganqu Cui, Yunkai Dang, Taiwen He, Xiaocheng Feng, Jun Song, Bo Zheng, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun • 2024

Related benchmarks

| Task                             | Dataset        | Result                          | Rank |
|----------------------------------|----------------|---------------------------------|------|
| Visual Question Answering        | VQA v2         | Accuracy: 75.2                  | 1165 |
| Visual Question Answering        | TextVQA        | Accuracy: 55.1                  | 1117 |
| Object Hallucination Evaluation  | POPE           | --                              | 935  |
| Hallucination Evaluation         | MMHal-Bench    | MMHal Score: 3.44               | 174  |
| Hallucination Evaluation         | HallusionBench | --                              | 93   |
| Hallucination Evaluation         | AMBER          | F1 Score: 90.9                  | 71   |
| Science Question Answering       | ScienceQA      | IMG Score: 68.2                 | 49   |
| Object Hallucination Evaluation  | CHAIR          | CS Score: 18.1                  | 49   |
| Vision-Language Understanding    | MM-Vet         | Total Score: 29.9               | 43   |
| Hallucination Evaluation         | Object-HalBench| Mention Hallucination Rate: 2.6 | 39   |

Showing 10 of 16 rows.

Other info

Code
