daDPO: Distribution-Aware DPO for Distilling Conversational Abilities
About
Large language models (LLMs) have demonstrated exceptional performance across various applications, but their conversational abilities decline sharply as model size decreases, presenting a barrier to their deployment in resource-constrained environments. Knowledge distillation (KD) with Direct Preference Optimization (dDPO) has emerged as a promising approach to enhancing the conversational abilities of smaller models using a larger teacher model. However, current methods primarily focus on "black-box" KD, which uses only the teacher's responses and overlooks the richer output distribution the teacher can provide. This paper addresses this gap by introducing daDPO (Distribution-Aware DPO), a unified objective for preference optimization and distribution-based distillation. We provide rigorous theoretical analysis and empirical validation, showing that daDPO outperforms existing methods both in restoring the performance of pruned models and in enhancing smaller LLMs. Notably, on in-domain evaluation, our method enables a 20%-pruned Vicuna1.5-7B to achieve near-teacher performance (-7.3% preference rate, versus -31% for dDPO), and allows Qwen2.5-1.5B to occasionally outperform its 7B teacher model (14.0% win rate).
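The abstract above describes combining the standard DPO preference objective with a distribution-based distillation signal from the teacher. The paper's exact formulation is not reproduced here, so the following is only a toy sketch of that general idea: a black-box DPO term on sequence log-probabilities, plus a white-box KL term between the teacher's and student's per-token vocabulary distributions. The function names, the weighting coefficient `alpha`, and the averaging scheme are all illustrative assumptions, not the paper's method.

```python
import math


def sigmoid(x):
    """Logistic function used by the DPO loss."""
    return 1.0 / (1.0 + math.exp(-x))


def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss from sequence log-probs (black-box: responses only).

    Each argument is the total log-probability of the chosen/rejected response
    under the student policy (pi_*) or the frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(sigmoid(margin))


def kl_divergence(p, q):
    """KL(p || q) between two discrete vocabulary distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def distribution_aware_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                            teacher_dists, student_dists,
                            beta=0.1, alpha=0.5):
    """Illustrative combined objective (alpha is a hypothetical trade-off
    weight, not from the paper): DPO preference term plus the mean token-level
    KL from the teacher's output distribution to the student's."""
    pref = dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta)
    distill = sum(kl_divergence(t, s)
                  for t, s in zip(teacher_dists, student_dists)) / len(teacher_dists)
    return pref + alpha * distill
```

When the student already matches the teacher's token distributions, the KL term vanishes and the objective reduces to plain dDPO, which is the sense in which the two signals are unified rather than alternated.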
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score | 6.1 | 331 |
| Instruction Following | AlpacaEval | Win Rate | 81.49 | 125 |
| Instruction Following | Arena Hard | Win Rate | 22.4 | 77 |
| Instruction Following and Helpfulness Evaluation | AlpacaEval 2.0 | Win Rate | 16.41 | 58 |
| Instruction Following | In-domain | Win Rate | 14.0 | 11 |
| Preference Alignment Evaluation | In-domain | Win Rate | -7.3 | 11 |