Lion: Adversarial Distillation of Proprietary Large Language Models
About
The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention. Previous works have focused on unidirectional knowledge distillation, aligning the student model's responses with the teacher model's responses to a set of instructions. Nevertheless, they overlooked the possibility of incorporating reciprocal "feedback" (identifying challenging instructions where the student model's performance falls short) to boost the student model's proficiency iteratively. To this end, we propose a novel adversarial distillation framework for more efficient knowledge transfer. Leveraging the versatile role adaptability of LLMs, we prompt the teacher model to identify "hard" instructions and generate new "hard" instructions for the student model, creating a three-stage adversarial loop of imitation, discrimination, and generation. By applying this adversarial framework, we successfully transfer knowledge from ChatGPT to a student model (named Lion), using a mere 70k training data. Our results show that Lion-13B not only achieves open-ended generation capabilities comparable to ChatGPT but also surpasses conventional state-of-the-art (SOTA) instruction-tuned models like Vicuna-13B by 55.4% on challenging zero-shot reasoning benchmarks such as BIG-Bench Hard (BBH) and by 16.7% on AGIEval. Code and model can be found at https://github.com/YJiangcm/Lion.
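The three-stage loop described above can be sketched as a toy simulation. This is a minimal illustration, not the actual training pipeline: `teacher_respond`, `student_respond`, `discriminate`, and `generate_hard` are hypothetical stand-ins for real LLM API calls and fine-tuning, and the 0.7 "absorption" probability is an arbitrary assumption that models imperfect student learning.

```python
import random

def teacher_respond(instr):
    # Stand-in for querying the proprietary teacher LLM (e.g. ChatGPT).
    return f"teacher answer to: {instr}"

def student_respond(instr, knowledge):
    # Toy student: answers correctly only for instructions it has absorbed.
    return teacher_respond(instr) if instr in knowledge else "I don't know"

def discriminate(instr, knowledge):
    # Teacher-as-referee: an instruction is "hard" if the student's
    # response diverges from the teacher's response.
    return student_respond(instr, knowledge) != teacher_respond(instr)

def generate_hard(hard_instrs, rng):
    # Teacher-as-generator: synthesize new instructions resembling hard ones.
    return [f"{h} (variant {rng.randint(0, 999)})" for h in hard_instrs]

def adversarial_distill(seed_instructions, rounds=3, seed=0):
    rng = random.Random(seed)
    pool = list(seed_instructions)
    knowledge = set()
    training_data = []
    for _ in range(rounds):
        # 1) Imitation: collect (instruction, teacher response) training pairs.
        training_data += [(i, teacher_respond(i)) for i in pool]
        # Toy "fine-tuning": the student absorbs each pair with probability 0.7.
        knowledge.update(i for i in pool if rng.random() < 0.7)
        # 2) Discrimination: teacher identifies instructions the student still fails.
        hard = [i for i in pool if discriminate(i, knowledge)]
        # 3) Generation: teacher writes fresh instructions modeled on the hard ones.
        pool = generate_hard(hard, rng) if hard else pool
    return training_data
```

In the real framework each stage is a prompt to the teacher model and the imitation step is an actual fine-tuning run on the student; the point of the sketch is the feedback structure, where each round's training pool is steered toward the instructions the student got wrong.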
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH | Accuracy | 9.79 | 643 |
| Code Generation | MBPP | Pass@1 | 39.63 | 175 |
| Mathematical Reasoning | GSM8K | Math Score | 54.76 | 171 |
| Code Generation | HumanEval | Pass@1 | 29.73 | 108 |
| Instruction Following | DollyEval | Score | 38.93 | 106 |
| Agentic Reasoning | ∞Bench | Score | 55.73 | 100 |
| Code Generation | LiveCodeBench | Pass@1 | 20.23 | 86 |
| Instruction Following | VicunaEval | Score | 36.52 | 80 |
| Code Generation | LiveCodeBench | Average Score | 23.44 | 68 |
| Code Generation | MBPP | Score | 43.84 | 38 |