
Zephyr: Direct Distillation of LM Alignment

About

We aim to produce a smaller language model that is aligned to user intent. Previous research has shown that applying distilled supervised fine-tuning (dSFT) on larger models significantly improves task accuracy; however, these models are unaligned, i.e. they do not respond well to natural prompts. To distill this property, we experiment with the use of preference data from AI Feedback (AIF). Starting from a dataset of outputs ranked by a teacher model, we apply distilled direct preference optimization (dDPO) to learn a chat model with significantly improved intent alignment. The approach requires only a few hours of training without any additional sampling during fine-tuning. The final result, Zephyr-7B, sets the state-of-the-art on chat benchmarks for 7B parameter models, and requires no human annotation. In particular, results on MT-Bench show that Zephyr-7B surpasses Llama2-Chat-70B, the best open-access RLHF-based model. Code, models, data, and tutorials for the system are available at https://github.com/huggingface/alignment-handbook.
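The dDPO step described above optimizes the direct preference optimization (DPO) objective over teacher-ranked (chosen, rejected) response pairs: maximize the log-sigmoid of a scaled margin between the policy's and a frozen reference model's log-probability ratios. Below is a minimal per-example sketch of that loss in pure Python; the function name, the scalar per-sequence log-probabilities, and the β value are illustrative assumptions, not the paper's actual implementation (which operates on batched token log-probs in PyTorch).

```python
import math


def dpo_loss(pi_logp_chosen: float, pi_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss (illustrative sketch).

    Inputs are total sequence log-probabilities under the trainable
    policy (pi_*) and the frozen reference model (ref_*); beta scales
    the implicit reward margin.
    """
    # Margin between the chosen and rejected implicit rewards,
    # each being beta * (policy log-prob - reference log-prob).
    margin = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)) written stably as log(1 + exp(-margin)).
    return math.log1p(math.exp(-margin))
```

When the policy matches the reference exactly, the margin is zero and the loss is log 2; as the policy assigns relatively more probability to the chosen response than the reference does, the loss falls toward zero, with no sampling needed during fine-tuning.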

Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, Thomas Wolf • 2023

Related benchmarks

Task                               Dataset         Result                 Rank
Commonsense Reasoning              HellaSwag       Accuracy 82.79         1460
Mathematical Reasoning             GSM8K           Accuracy 61.63         983
Code Generation                    HumanEval       --                     850
Multi-task Language Understanding  MMLU            Accuracy 58.9          842
Commonsense Reasoning              WinoGrande      Accuracy 74.19         776
Language Understanding             MMLU            Accuracy 56.9          756
Reasoning                          BBH             --                     507
Multi-turn Dialogue Evaluation     MT-Bench        Overall Score 7.34     331
Instruction Following              IFEval          Accuracy (0-100) 43.3  292
Instruction Following              AlpacaEval 2.0  LC Win Rate 14.78      281
Showing 10 of 60 rows
