Helix: Evolutionary Reinforcement Learning for Open-Ended Scientific Problem Solving
About
Large language models (LLMs) with reasoning abilities have demonstrated growing promise for tackling complex scientific problems. Yet such tasks are inherently domain-specific, unbounded, and open-ended, demanding exploration across vast and flexible solution spaces. Existing approaches, whether purely learning-based or reliant on carefully designed workflows, often suffer from limited exploration efficiency and poor generalization. To overcome these challenges, we present HELIX -- a Hierarchical Evolutionary reinforcement Learning framework with In-context eXperiences. HELIX introduces two key novelties: (i) a diverse yet high-quality pool of candidate solutions that broadens exploration through in-context learning, and (ii) reinforcement learning for iterative policy refinement that progressively elevates solution quality. This synergy enables the discovery of more advanced solutions. On the circle packing task, HELIX achieves a state-of-the-art result with a sum of radii of 2.63598308 using only a 14B model. Across standard machine learning benchmarks, HELIX further surpasses GPT-4o paired with a carefully engineered pipeline, delivering an average F1 improvement of 5.95 points on the Adult and Bank Marketing datasets.
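The pool-plus-refinement loop described above can be sketched as follows. This is a toy illustration, not the paper's implementation: `generate` stands in for an LLM proposal conditioned on in-context exemplars (here it is a simple numeric mutation so the loop actually runs), and the pool size, exemplar count, and objective are all assumptions.

```python
import random

def score(value):
    """Toy objective to maximize: -(value - 1)^2, optimum at value = 1."""
    return -(value - 1.0) ** 2

def generate(exemplars):
    """Stand-in for an LLM call conditioned on top-scoring exemplars."""
    base = max(exemplars, key=lambda s: s["score"])["value"]
    return base + random.uniform(-0.1, 0.1)

def evolve(pool_size=8, exemplar_k=3, steps=200, seed=0):
    random.seed(seed)
    # Initialize a pool of scored candidate solutions.
    pool = [{"value": v, "score": score(v)}
            for v in (random.uniform(-2.0, 2.0) for _ in range(pool_size))]
    for _ in range(steps):
        # Select a high-quality subset to serve as in-context experiences.
        exemplars = sorted(pool, key=lambda s: s["score"],
                           reverse=True)[:exemplar_k]
        value = generate(exemplars)
        candidate = {"value": value, "score": score(value)}
        # Keep the pool bounded: replace the worst member if improved.
        worst = min(pool, key=lambda s: s["score"])
        if candidate["score"] > worst["score"]:
            pool.remove(worst)
            pool.append(candidate)
    return max(pool, key=lambda s: s["score"])

best = evolve()
```

In HELIX the second ingredient, reinforcement learning, additionally updates the generator itself between iterations; the sketch above only captures the evolutionary pool and in-context selection.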
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Learning | bank-marketing | F1 Score | 80.65 | 15 |
| Mathematical Optimization | Eggholder function | -- | -- | 8 |
| Circle packing | Packing in Unit Square | Sum of Radii | 2.636 | 7 |
| Circle packing | Packing in Unit Disk | Sum of Radii | 4.664 | 7 |
| Machine Learning | Adult Income | F1 Score | 82.07 | 7 |
| Machine Learning | Boston Housing | RMSE | 1.747 | 7 |
| Physics Simulation | Inductor | Simulation Metric | 9.609 | 7 |
| Symbolic Regression | Biology | Error | 2.98e-8 | 7 |
| Function Minimization | Keanes Bump 10d | Objective Value Score | 100 | 7 |
| Symbolic Regression | Physics | NMSE Average | 2.76e-5 | 7 |
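For the circle-packing rows, the metric is the sum of radii of a valid packing. A minimal checker for the unit-square variant could look like the sketch below; the function name, tolerance, and demo packing are illustrative assumptions, not part of the benchmark harness.

```python
from itertools import combinations
from math import hypot

def packing_score(circles, tol=1e-9):
    """Return the sum of radii if `circles` (a list of (x, y, r) tuples)
    is a valid packing of the unit square, else raise ValueError."""
    for x, y, r in circles:
        # Each circle must have positive radius and lie inside [0, 1]^2.
        if r <= 0 or x - r < -tol or x + r > 1 + tol \
                or y - r < -tol or y + r > 1 + tol:
            raise ValueError(f"circle ({x}, {y}, {r}) leaves the unit square")
    for (x1, y1, r1), (x2, y2, r2) in combinations(circles, 2):
        # Centers must be at least r1 + r2 apart (tangency allowed).
        if hypot(x1 - x2, y1 - y2) < r1 + r2 - tol:
            raise ValueError("circles overlap")
    return sum(r for _, _, r in circles)

# Four quarter-radius circles tiling the unit square: sum of radii = 1.0.
demo = [(0.25, 0.25, 0.25), (0.75, 0.25, 0.25),
        (0.25, 0.75, 0.25), (0.75, 0.75, 0.25)]
```

The reported 2.636 for the unit square corresponds to a much denser 26-circle configuration than this simple demo.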