
Self-Rewarding Language Models

About

We posit that to achieve superhuman agents, future models require superhuman feedback to provide an adequate training signal. Current approaches commonly train reward models from human preferences, which may be bottlenecked by human performance level; moreover, these separate frozen reward models cannot learn to improve during LLM training. In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training, not only does instruction-following ability improve, but so does the model's ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While much is left to explore, this work opens the door to the possibility of models that can continually improve on both axes.
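The data-construction loop described in the abstract — the same model both answering prompts and judging its own candidates to build preference pairs for Iterative DPO — can be sketched as follows. All names here (`self_reward_iteration`, the toy responder and judge) are hypothetical stand-ins for illustration, not the paper's implementation; the actual work uses Llama 2 70B for both roles and an LLM-as-a-Judge scoring prompt.

```python
import random

def self_reward_iteration(model_respond, model_judge, prompts, n_candidates=4):
    """Build DPO preference pairs for one Self-Rewarding iteration (sketch):
    sample candidate responses per prompt, score each with the SAME model
    acting as LLM-as-a-Judge, and keep the (best, worst) pair."""
    pairs = []
    for prompt in prompts:
        candidates = [model_respond(prompt) for _ in range(n_candidates)]
        scored = sorted(candidates, key=lambda r: model_judge(prompt, r))
        worst, best = scored[0], scored[-1]
        # Skip prompts where all candidates tie; there is no preference signal.
        if model_judge(prompt, best) > model_judge(prompt, worst):
            pairs.append({"prompt": prompt, "chosen": best, "rejected": worst})
    return pairs

# Toy stand-ins (hypothetical, NOT the paper's models): a responder that
# emits answers of random quality, and a judge that reads the quality tag.
rng = random.Random(0)

def toy_respond(prompt):
    return f"{prompt} -> answer (quality={rng.randint(0, 5)})"

def toy_judge(prompt, response):
    return int(response.rsplit("=", 1)[1].rstrip(")"))

pairs = self_reward_iteration(toy_respond, toy_judge, ["Q1", "Q2"])
# Every emitted pair's "chosen" judges strictly higher than its "rejected".
```

The resulting pairs feed a standard DPO training step; the updated model then becomes both responder and judge for the next iteration, which is what allows the reward signal itself to improve over iterations.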

Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 93.26 | 1460 |
| Visual Question Answering | VizWiz | Accuracy | 56.1 | 1043 |
| Visual Question Answering | GQA | -- | -- | 963 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 76.04 | 797 |
| Language Understanding | MMLU | Accuracy | 33 | 756 |
| Commonsense Reasoning | PIQA | Accuracy | 47.41 | 647 |
| Reasoning | BBH | Accuracy | 31.2 | 507 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy | 30.19 | 433 |
| Question Answering | ARC Easy | Normalized Acc | 77 | 385 |
| Multimodal Understanding | MMBench | -- | -- | 367 |

Showing 10 of 43 rows.
