
Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning

About

Fine-tuning large language models (LLMs) for downstream tasks is an essential stage of modern AI deployment. Reinforcement learning (RL) has emerged as the dominant fine-tuning paradigm, underpinning many state-of-the-art LLMs. In contrast, evolution strategies (ES) have largely been overlooked due to the widespread belief that they do not scale to modern model sizes. This paper overturns that assumption by demonstrating the first successful application of ES to full-parameter fine-tuning of LLMs at the billion-parameter scale, without dimensionality reduction. ES can indeed search over extremely high-dimensional parameter spaces and outperform established RL implementations across multiple axes, including improved tolerance to long-horizon and delayed rewards, robustness across diverse base LLMs, reduced susceptibility to reward hacking, and improved training stability. These findings suggest that ES is not merely a viable alternative to RL, but a fundamentally different and powerful backpropagation-free post-training paradigm that opens a new direction for LLM fine-tuning beyond current RL-based approaches. The source code is available at: https://github.com/VsonicV/es-fine-tuning-paper.
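To make the ES idea concrete, here is a minimal sketch of a generic OpenAI-style evolution strategies update on a toy objective. The specific choices below (antithetic sampling, rank normalization, the `sigma`/`alpha` values, and the `es_step` helper) are illustrative assumptions, not the paper's exact recipe for billion-parameter LLMs:

```python
import numpy as np

def es_step(theta, reward_fn, sigma=0.02, alpha=0.005, pop=8, rng=None):
    # One generic evolution-strategies update (illustrative sketch, not the
    # paper's implementation): perturb parameters with mirrored Gaussian
    # noise, score each perturbation, and move theta along the
    # reward-weighted average of the noise directions.
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((pop, theta.size))
    eps = np.concatenate([eps, -eps])  # antithetic pairs reduce variance
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    # Rank-normalize rewards so the update is insensitive to reward scale.
    ranks = rewards.argsort().argsort().astype(float)
    adv = (ranks - ranks.mean()) / (ranks.std() + 1e-8)
    grad = (adv[:, None] * eps).mean(axis=0) / sigma
    return theta + alpha * grad

# Toy usage: maximize the reward -||theta - 1||^2 starting from theta = 0.
rng = np.random.default_rng(0)
theta = np.zeros(5)
for _ in range(300):
    theta = es_step(theta, lambda p: -np.sum((p - 1.0) ** 2), rng=rng)
```

Note that only forward evaluations of the reward are needed; there is no backpropagation through the model, which is what makes the approach a backpropagation-free alternative to RL fine-tuning.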

Xin Qiu, Yulu Gan, Conor F. Hayes, Qiyao Liang, Yinggan Xu, Roberto Dailey, Elliot Meyerson, Babak Hodjat, Risto Miikkulainen• 2025

Related benchmarks

Task                    Dataset     Metric    Result  Rank
Mathematical Reasoning  MATH 500    Accuracy  69.9    391
Mathematical Reasoning  GSM8K       Accuracy  89.1    303
Mathematical Reasoning  Countdown   Accuracy  71      126
Coding                  MBPP        Accuracy  77.2    95
Mathematical Reasoning  OlyBench    Accuracy  36.4    59
Chemistry               USPTO       Accuracy  52.9    48
Writing                 ROCStories  Accuracy  68.1    48
