
UltraFeedback: Boosting Language Models with Scaled AI Feedback

About

Learning from human feedback has become a pivotal technique in aligning large language models (LLMs) with human preferences. However, acquiring vast and premium human feedback is bottlenecked by time, labor, and human capability, resulting in small sizes or limited topics of current datasets. This further hinders feedback learning as well as alignment research within the open-source community. To address this issue, we explore how to go beyond human feedback and collect high-quality AI feedback automatically as a scalable alternative. Specifically, we identify scale and diversity as the key factors for feedback data to take effect. Accordingly, we first broaden instructions and responses in both amount and breadth to encompass a wider range of user-assistant interactions. Then, we meticulously apply a series of techniques to mitigate annotation biases for more reliable AI feedback. We finally present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset, which contains over 1 million GPT-4 feedback annotations for 250k user-assistant conversations across various aspects. Built upon UltraFeedback, we align a LLaMA-based model by best-of-n sampling and reinforcement learning, demonstrating its exceptional performance on chat benchmarks. Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models, serving as a solid foundation for future feedback learning research. Our data and models are available at https://github.com/thunlp/UltraFeedback.

Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, Maosong Sun• 2023
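The best-of-n sampling mentioned in the abstract can be sketched minimally: draw n candidate responses from the policy model and keep the one a reward model scores highest. The `toy_generate` and `toy_score` functions below are hypothetical stand-ins for the actual LLaMA-based policy and the UltraFeedback-trained reward model, not the paper's implementation.

```python
import random


def best_of_n(prompt, generate, score, n=4):
    """Sample n candidate responses and return the highest-scoring one.

    generate: callable mapping a prompt to one sampled response
    score:    callable mapping a response to a scalar reward
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)


# Toy stand-ins (hypothetical) for a sampling policy and a reward model.
def toy_generate(prompt):
    return prompt + " " + random.choice(["ok", "good", "great", "excellent"])


def toy_score(response):
    # Pretend longer answers earn higher reward; a real reward model
    # would be a learned preference scorer.
    return len(response)


if __name__ == "__main__":
    random.seed(0)
    best = best_of_n("Explain RLHF briefly.", toy_generate, toy_score, n=8)
    print(best)
```

With a trained reward model in place of `toy_score`, increasing n trades inference compute for response quality without any further training of the policy.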

Related benchmarks

Task                                Dataset      Result           Rank
Mathematical Reasoning              GSM8K        Accuracy 56.27   1362
Commonsense Reasoning               WinoGrande   --               1085
Code Generation                     HumanEval    Pass@1 41.46     1036
Mathematical Reasoning              MATH         Accuracy 20.85   882
Multi-task Language Understanding   MMLU         Accuracy 58.7    876
Reasoning                           BBH          --               672
Instruction Following               IFEval       --               625
Multi-turn Dialogue Evaluation      MT-Bench     Overall Score 6  447
Code Generation                     HumanEval+   Pass@1 23.05     383
Question Answering                  TriviaQA     Accuracy 54      238

(Showing 10 of 74 rows)
