
Quality-constrained Entropy Maximization Policy Optimization for LLM Diversity

About

Recent research indicates that while alignment methods significantly improve the quality of large language model (LLM) outputs, they simultaneously reduce the diversity of those outputs. Although some methods have been proposed to enhance LLM output diversity, they often come at the cost of reduced performance. In this work, we first theoretically demonstrate that the alignment task can be decomposed into two distributions: quality and diversity. To enhance the diversity of LLM outputs while ensuring quality, we propose Quality-constrained Entropy Maximization Policy Optimization (QEMPO). QEMPO aims to maximize the output entropy of the policy while ensuring output quality. By adding different constraints to QEMPO, we obtain different policies. To optimize these policies, we propose both online and offline training methods. Experiments validate that QEMPO achieves performance comparable to or even better than RLHF while improving output diversity.
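The abstract describes a constrained objective: maximize policy entropy subject to an output-quality constraint. The paper's actual QEMPO loss is not given on this page; the sketch below is only a hypothetical illustration of the general idea via a Lagrangian-style hinge penalty, where `quality_floor` and `lam` are assumed names and values, not the authors' formulation.

```python
def entropy_max_with_quality_floor(logprobs, rewards, quality_floor=0.7, lam=5.0):
    """Hypothetical sketch of a quality-constrained entropy objective.

    logprobs: per-sample log-probabilities log pi(y|x) under the policy.
    rewards:  per-sample quality scores from a reward model.
    Returns a scalar loss to minimize.
    """
    # Monte Carlo entropy estimate: H ~= -E[log pi(y|x)]
    entropy_est = -sum(logprobs) / len(logprobs)
    avg_quality = sum(rewards) / len(rewards)
    # Hinge penalty activates only when average quality drops below the floor,
    # approximating the hard constraint "quality >= quality_floor"
    penalty = max(0.0, quality_floor - avg_quality)
    # Minimizing -entropy maximizes entropy; lam weights constraint violation
    return -entropy_est + lam * penalty

# High-quality batch: constraint inactive, loss is just negative entropy
loss_ok = entropy_max_with_quality_floor([-1.2, -0.8, -1.0], [0.9, 0.8, 0.85])
# Low-quality batch: the hinge penalty raises the loss
loss_bad = entropy_max_with_quality_floor([-1.2, -0.8, -1.0], [0.3, 0.4, 0.2])
```

In a real trainer this scalar would be differentiated with respect to the policy parameters; the point of the sketch is only the shape of the trade-off, where diversity (entropy) is rewarded until quality falls below the floor.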

Haihui Pan, Yuzhong Hong, Shaoke Lv, Junwei Bao, Hongfei Jiang, Yang Song • 2026

Related benchmarks

Task                  | Dataset  | Result                        | Rank
Instruction Following | MT-Bench | MT-Bench Score: 7.96          | 189
Output Diversity      | MT-Bench | Lexical Diversity Score: 48.36 | 20
