
Dynamic Noise Preference Optimization: Self-Improvement of Large Language Models with Self-Synthetic Data

About

Although LLMs have achieved significant success, their reliance on large volumes of human-annotated data limits their potential for further scaling. Consequently, fine-tuning LLMs on self-generated synthetic data has become crucial for improving them without extensive human annotation. However, current methods often fail to ensure consistent improvements across iterations, with performance stagnating after only minimal updates. To overcome these challenges, we introduce Dynamic Noise Preference Optimization (DNPO), which combines dynamic sample labeling for constructing preference pairs with controlled, trainable noise injection during preference optimization. Our approach effectively prevents stagnation and enables continuous improvement. In experiments with Llama-3.2-3B and Zephyr-7B, DNPO consistently outperforms existing methods across multiple benchmarks. Additionally, with Zephyr-7B, DNPO substantially improves the quality of model-generated data, achieving a 29.4% win-loss rate gap over the baseline in GPT-4 evaluations.
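The abstract does not spell out the objective, but the core idea of injecting controlled, trainable noise into preference optimization can be illustrated on a DPO-style loss. Below is a minimal sketch, assuming Gaussian noise with a tunable standard deviation is added to the preference margin; the function name, injection point, and noise form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def noisy_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                   beta=0.1, noise_std=0.0, rng=None):
    """DPO-style preference loss with optional Gaussian noise on the margin.

    logp_w / logp_l: policy log-probs of the preferred / dispreferred response.
    ref_logp_w / ref_logp_l: reference-model log-probs of the same responses.
    noise_std: trainable in DNPO; treated as a plain scalar here (assumption --
    the paper's exact injection mechanism is not described in this abstract).
    """
    rng = rng or np.random.default_rng(0)
    # Standard DPO preference margin between chosen and rejected responses.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # Hypothetical noise injection: perturb the margin before the sigmoid.
    noisy_margin = margin + rng.normal(0.0, noise_std)
    # Negative log-sigmoid of the scaled margin (the usual DPO loss form).
    return -np.log(1.0 / (1.0 + np.exp(-beta * noisy_margin)))
```

With `noise_std=0` this reduces to the standard DPO loss; a zero margin gives the chance-level loss log 2, and a positive margin gives a smaller loss.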

Haoyan Yang, Khiem Le, Ting Hua, Shangqian Gao, Binfeng Xu, Zheng Tang, Jie Xu, Nitesh V. Chawla, Hongxia Jin, Vijay Srinivasan• 2025

Related benchmarks

Task: Large Language Model Evaluation
Datasets: ARC, TruthfulQA, Winogrande, GSM8K, HellaSwag, MMLU
Result: ARC Accuracy 73.7
Rank: 16
