
Provably avoiding over-optimization in Direct Preference Optimization without knowing the data distribution

About

We introduce PEPO (Pessimistic Ensemble based Preference Optimization), a single-step Direct Preference Optimization (DPO)-like algorithm that mitigates the well-known over-optimization issue in preference learning without requiring knowledge of the data-generating distribution or learning an explicit reward model. PEPO achieves pessimism via an ensemble of preference-optimized policies trained on disjoint data subsets, which it aggregates through a worst-case construction that favors agreement across models. In the tabular setting, PEPO achieves sample complexity guarantees depending only on a single-policy concentrability coefficient, thus avoiding the all-policy concentrability that affects the guarantees of algorithms prone to over-optimization, such as DPO. The theoretical findings are corroborated by strong empirical performance, while PEPO retains the simplicity and practicality of DPO-style training.
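The core idea of training ensemble members on disjoint shards and then scoring responses by a worst-case aggregation can be illustrated with a minimal sketch. The exact loss and aggregation rule used by PEPO are not specified on this page, so the helper names (`dpo_loss`, `pessimistic_score`), the use of a min over ensemble log-probabilities, and all numeric settings below are illustrative assumptions, not the paper's definitive implementation:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO logistic loss on implicit reward margins.

    Each ensemble member would minimize this on its own disjoint
    data shard; shown here only to fix notation.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return np.log1p(np.exp(-margin))  # -log sigmoid(margin)

def pessimistic_score(ensemble_logps):
    """Worst-case aggregation over ensemble members (assumed form).

    ensemble_logps: array of shape (K, num_responses) with each
    member's log-probability for each candidate response.
    A response scores highly only if *all* K members assign it high
    probability, so responses the shards disagree on are penalized,
    which is the pessimism that discourages over-optimization.
    """
    return np.min(ensemble_logps, axis=0)

# Two members, two candidate responses: members agree on response 0
# but disagree on response 1, so pessimism prefers response 0.
logps = np.array([[-1.0, -0.5],
                  [-1.2, -6.0]])
scores = pessimistic_score(logps)
```

Under this toy aggregation, the response with the higher worst-case log-probability would be preferred at selection time, even if some single member rates the other response higher.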

Adam Barla, Emanuele Nevali, Luca Viano, Volkan Cevher • 2026

Related benchmarks

Task                   Dataset            Result                  Rank
Instruction Following  AlpacaEval (test)  Helpfulness Score 87.2  32
