
VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation

About

Visual generative models have achieved remarkable progress in synthesizing photorealistic images and videos, yet aligning their outputs with human preferences across critical dimensions remains a persistent challenge. Although reinforcement learning from human feedback offers promise for preference alignment, existing reward models for visual generation face limitations, including black-box scoring without interpretability and the unexpected biases that can result. We present VisionReward, a general framework for learning human visual preferences in both image and video generation. Specifically, we employ a hierarchical visual assessment framework to capture fine-grained human preferences, and leverage linear weighting to enable interpretable preference learning. Furthermore, we propose a multi-dimensionally consistent strategy for using VisionReward as a reward model during preference optimization for visual generation. Experiments show that VisionReward significantly outperforms existing image and video reward models on both machine metrics and human evaluation. Notably, VisionReward surpasses VideoScore by 17.2% in preference prediction accuracy, and text-to-video models optimized with VisionReward achieve a 31.6% higher pairwise win rate compared to the same models using VideoScore. All code and datasets are provided at https://github.com/THUDM/VisionReward.
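To make the two key ideas concrete, here is a minimal, hypothetical sketch: a linear weighting that combines fine-grained checklist judgments into one interpretable scalar reward, and a multi-dimensionally consistent comparison that prefers one output over another only when no dimension is traded away. The question names and weights below are illustrative assumptions, not the paper's actual checklist or learned weights.

```python
def linear_reward(answers: dict[str, bool], weights: dict[str, float]) -> float:
    """Interpretable scalar reward: a weighted sum of binary checklist
    judgments (True = 1, False = 0), one weight per question."""
    return sum(weights[q] * float(answers[q]) for q in weights)

def dominates(a: dict[str, bool], b: dict[str, bool]) -> bool:
    """Multi-dimensionally consistent preference: accept the pair (a > b)
    only if a is at least as good on every checklist question and strictly
    better on at least one, so no dimension is sacrificed for another."""
    at_least_as_good = all(a[q] >= b[q] for q in a)
    strictly_better = any(a[q] > b[q] for q in a)
    return at_least_as_good and strictly_better

# Illustrative checklist questions and weights (assumed, not from the paper).
weights = {"aligned_with_prompt": 2.0, "sharp_details": 1.0, "stable_motion": 1.5}
video_a = {"aligned_with_prompt": True, "sharp_details": True, "stable_motion": True}
video_b = {"aligned_with_prompt": True, "sharp_details": False, "stable_motion": True}

print(linear_reward(video_a, weights))  # 4.5
print(dominates(video_a, video_b))      # True: a wins without trading off any dimension
```

Because the reward is a linear function of named checklist answers, each weight directly exposes how much a given quality dimension contributes to the final score, which is what makes the scoring interpretable rather than black-box.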

Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, Jiayan Teng, Zhuoyi Yang, Wendi Zheng, Xiao Liu, Dan Zhang, Ming Ding, Xiaohan Zhang, Xiaotao Gu, Shiyu Huang, Minlie Huang, Jie Tang, Yuxiao Dong • 2024

Related benchmarks

Task | Dataset | Result | Rank
Video Preference Alignment | GenAI-Bench | Alignment Accuracy (w/ Ties): 51.56 | 11
Video Reward Assessment | VideoGen-Reward Bench | VQ Accuracy (w/ Ties): 47.43 | 9
Physical Plausibility and Subject Deformity | Internal Dataset (ID train) | RM Accuracy: 58.45 | 8
TA | Internal ID (train) | RM Accuracy: 0.3075 | 8
TA | Curated Prompt Set (OOD) | RM Accuracy: 0.2224 | 8
Physical Plausibility and Subject Deformity | Curated Prompt Set (OOD) | RM Accuracy: 50.75 | 8
Video Preference Modeling | GenAI-Bench (evaluation) | Tau (%): 52.6 | 7
Video Preference Modeling | VideoGen-Reward (evaluation) | Tau (%): 57.9 | 7
Video Preference Modeling | MJBench Video (evaluation) | Tau (%): 54.1 | 7
Video Preference Alignment | MonteBench | Alignment Accuracy (w/ Ties): 64 | 6

(10 of 12 rows shown)
