
Your Weak LLM is Secretly a Strong Teacher for Alignment

About

The burgeoning capabilities of large language models (LLMs) have underscored the need for alignment to ensure these models act in accordance with human values and intentions. Existing alignment frameworks impose constraints in the form of either expensive human effort or high computational cost. This paper explores a promising middle ground, where we employ a weak LLM that is significantly less resource-intensive than top-tier models, yet offers more automation than purely human feedback. We present a systematic study to evaluate and understand a weak LLM's ability to generate feedback for alignment. Our empirical findings demonstrate that weak LLMs can provide feedback that rivals or even exceeds that of fully human-annotated data. Our study indicates a minimized impact of model size on feedback efficacy, shedding light on a scalable and sustainable alignment strategy. To deepen our understanding of alignment under weak LLM feedback, we conduct a series of qualitative and quantitative analyses, offering novel insights into the quality discrepancies between human feedback and weak LLM feedback.
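The core idea of replacing human annotators with a weak LLM can be sketched as a preference-annotation loop: for each instruction and pair of candidate responses, the weak model is prompted to pick the better one, and the verdict yields a chosen/rejected pair for preference optimization. The sketch below is illustrative only, not the authors' implementation; the prompt template, the `weak_llm` callable, and the output format are all assumptions.

```python
# Hedged sketch (not the paper's code): using a weak LLM as a preference
# annotator. The weak model is abstracted as a callable taking a prompt
# string and returning its reply; the template is a plausible placeholder.

JUDGE_TEMPLATE = (
    "Given the instruction and two candidate responses, answer with the "
    "single letter of the better response.\n"
    "Instruction: {instruction}\n"
    "Response A: {response_a}\n"
    "Response B: {response_b}\n"
    "Better response (A or B):"
)

def annotate_preference(instruction, response_a, response_b, weak_llm):
    """Ask the weak LLM which response is better; return a chosen/rejected pair."""
    prompt = JUDGE_TEMPLATE.format(
        instruction=instruction, response_a=response_a, response_b=response_b
    )
    verdict = weak_llm(prompt).strip().upper()
    if verdict.startswith("A"):
        chosen, rejected = response_a, response_b
    else:
        # Treat any other reply as "B"; a real pipeline might retry or skip.
        chosen, rejected = response_b, response_a
    return {"prompt": instruction, "chosen": chosen, "rejected": rejected}

def toy_weak_llm(prompt):
    # Stand-in judge that always answers "B". In practice this would call a
    # small instruction-tuned model.
    return "B"

pair = annotate_preference(
    "Explain preference alignment.", "Short answer.", "A fuller answer.", toy_weak_llm
)
```

The resulting `prompt`/`chosen`/`rejected` records match the common input format for preference-optimization trainers such as DPO, so weak-LLM feedback can drop in wherever human-labeled pairs were used.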

Leitian Tao, Yixuan Li • 2024

Related benchmarks

Task                 | Dataset        | Result          | Rank
Summarization        | TL;DR          | Win Rate: 85.9  | 42
Preference Alignment | TL;DR (test)   | Win Rate: 66.4  | 36
Preference Alignment | HH-RLHF (test) | Win Rate: 83.01 | 36
Preference Alignment | HH-RLHF        | --              | 31
Preference Alignment | UFB            | Win Rate: 79.5  | 18
Preference Alignment | UFB (test)     | Win Rate: 78.12 | 18
