# WizardCoder: Empowering Code Large Language Models with Evol-Instruct

## About
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance on code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we demonstrate the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are publicly available at https://github.com/nlpxucan/WizardLM
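The core idea of Evol-Instruct is to iteratively rewrite an instruction into a harder one by prompting an LLM with an "evolution" heuristic. The sketch below is illustrative only: the template strings and the `complete` callable are assumptions, not the paper's actual prompts or API.

```python
# Illustrative evolution heuristics in the spirit of Evol-Instruct adapted
# to code; the paper's actual prompt templates differ.
EVOLVE_TEMPLATES = [
    "Add one new constraint or requirement to the following task:\n{instr}",
    "Rewrite the following task to require higher time or space efficiency:\n{instr}",
    "Deepen the following task with an additional reasoning step:\n{instr}",
]

def evolve(instruction: str, complete, rounds: int = 1) -> str:
    """Run `rounds` evolution steps, each asking an LLM (`complete`, any
    prompt -> text callable) to produce a harder version of the instruction."""
    for i in range(rounds):
        template = EVOLVE_TEMPLATES[i % len(EVOLVE_TEMPLATES)]
        instruction = complete(template.format(instr=instruction))
    return instruction
```

The evolved instructions, paired with model-generated solutions, then form the instruction fine-tuning dataset.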
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 65.06 | 1460 |
| Code Generation | HumanEval | Pass@1 | 73.2 | 850 |
| Multi-task Language Understanding | MMLU | Accuracy | 32.29 | 842 |
| Code Generation | HumanEval (test) | Pass@1 | 73.8 | 444 |
| Code Generation | MBPP (test) | Pass@1 | 73.2 | 276 |
| Commonsense Reasoning | WinoGrande | Accuracy | 61.72 | 231 |
| Code Generation | HumanEval+ | Pass@1 | 56.7 | 189 |
| Code Generation | MBPP | Pass@1 | 51.8 | 175 |
| Question Answering | ARC | Accuracy | 41.81 | 154 |
| Code Generation | HumanEval 1.0 (test) | Pass@1 | 73.2 | 145 |
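The Pass@1 figures above use the standard pass@k protocol for code benchmarks: generate n samples per problem, count how many pass the unit tests, and compute the unbiased estimator. A minimal sketch, assuming the estimator from the HumanEval evaluation setup:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations passes, given c of the n pass."""
    if n - c < k:
        return 1.0  # too few failures left for k draws to all fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=10 samples of which c=3 pass, pass@1 is the plain pass rate 3/10.
print(pass_at_k(10, 3, 1))
```

The benchmark score is the mean of this quantity over all problems in the dataset.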