
Compositional preference models for aligning LMs

About

As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitting the preference dataset. We propose Compositional Preference Models (CPMs), a novel PM framework that decomposes one global preference assessment into several interpretable features, obtains scalar scores for these features from a prompted LM, and aggregates these scores using a logistic regression classifier. Through these simple steps, CPMs allow one to control which properties of the preference data are used to train the preference model and to build it from features that are believed to underlie human preference judgments. Our experiments show that CPMs not only improve generalization and are more robust to overoptimization than standard PMs, but also that best-of-n samples obtained using CPMs tend to be preferred over samples obtained using conventional PMs. Overall, our approach demonstrates the benefits of endowing PMs with priors about which features determine human preferences, while relying on LM capabilities to extract those features in a scalable and robust way.
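The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names are hypothetical, the prompted-LM scorer is replaced by a deterministic stand-in, and the logistic regression is a plain gradient-descent fit on score differences of preference pairs.

```python
import numpy as np

FEATURES = ["helpfulness", "coherence", "factuality"]  # illustrative only

def feature_scores(prompt, response):
    # Stand-in for a prompted LM that returns one scalar score per feature.
    rng = np.random.default_rng(abs(hash((prompt, response))) % (2**32))
    return rng.uniform(1.0, 10.0, size=len(FEATURES))

def fit_logistic(X, y, lr=0.1, steps=500):
    # Plain gradient-descent logistic regression (no intercept): learns one
    # weight per feature from feature-score differences of preference pairs.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy preference data: (prompt, chosen response, rejected response).
pairs = [("q1", "good answer", "bad answer"), ("q2", "better", "worse")]
X, y = [], []
for prompt, chosen, rejected in pairs:
    diff = feature_scores(prompt, chosen) - feature_scores(prompt, rejected)
    X.append(diff); y.append(1.0)    # chosen beats rejected
    X.append(-diff); y.append(0.0)   # and the reverse comparison
w = fit_logistic(np.array(X), np.array(y))

def cpm_score(prompt, response):
    # Aggregate per-feature scores into one preference score.
    return float(feature_scores(prompt, response) @ w)

# Best-of-n selection: keep the candidate the CPM scores highest.
candidates = ["draft A", "draft B", "draft C"]
best = max(candidates, key=lambda r: cpm_score("q1", r))
```

Because the aggregation is a linear combination of named features, the learned weights `w` can be read off directly, which is the transparency benefit the abstract refers to.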

Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Marc Dymetman • 2023

Related benchmarks

Task                           Dataset               Result                               Rank
Long-form Question Answering   Long-form QA (test)   Win Rate vs. Holistic Reward: 59.8   13
Machine Translation            MT (test)             Average Win Rate: 49.8               12
