Principle-Evolvable Scientific Discovery via Uncertainty Minimization
About
Large Language Model (LLM)-based scientific agents have accelerated scientific discovery, yet they often suffer from significant inefficiencies due to adherence to fixed initial priors. Existing approaches predominantly operate within a static hypothesis space, which restricts the discovery of novel phenomena and results in computational waste when baseline theories fail. To address this, we propose shifting the focus from searching over hypotheses to evolving the underlying scientific principles. We present PiEvo, a principle-evolvable framework that treats scientific discovery as Bayesian optimization over an expanding principle space. By integrating Information-Directed Hypothesis Selection via Gaussian Process and an anomaly-driven augmentation mechanism, PiEvo enables agents to autonomously refine their theoretical worldview. Evaluation across four benchmarks demonstrates that PiEvo (1) achieves an average solution quality of 90.81%~93.15%, a 29.7%~31.1% improvement over the state of the art, (2) attains an 83.3% speedup in convergence steps by optimizing over the compact principle space, which significantly reduces sample complexity, and (3) maintains robust performance across diverse scientific domains and LLM backbones.
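To make the selection step concrete, below is a minimal, self-contained sketch of Gaussian-Process-guided hypothesis selection in the spirit described above. It is not PiEvo's implementation: the RBF kernel, the 1-D hypothesis encoding, and the use of an upper-confidence-bound score as a simple stand-in for the information-directed criterion are all illustrative assumptions.

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel (illustrative choice of covariance).
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x_star, noise=1e-6):
    # Exact GP regression posterior at a query point x_star,
    # given observed (hypothesis, score) pairs (xs, ys).
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k_star = [rbf(a, x_star) for a in xs]
    alpha = solve(K, ys)                     # K^{-1} y
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)                     # K^{-1} k_star
    var = rbf(x_star, x_star) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, max(var, 0.0)

def select_hypothesis(candidates, xs, ys, beta=2.0):
    # Score each candidate by posterior mean + beta * posterior std
    # (a UCB proxy for an information-directed acquisition rule) and
    # return the highest-scoring candidate hypothesis.
    scores = []
    for c in candidates:
        m, v = gp_posterior(xs, ys, c)
        scores.append(m + beta * math.sqrt(v))
    return candidates[max(range(len(candidates)), key=lambda i: scores[i])]
```

With observations at 0, 1, 2 (scores 0, 1, 2), a positive `beta` favors the poorly-explored candidate far from the data, while `beta=0` reduces to pure exploitation of the posterior mean.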
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Scientific Discovery | MBO | Solution Quality | 153.5 | 14 |
| Scientific Discovery | NHO | Solution Quality (SQ) | 0.9636 | 14 |
| Scientific Discovery | SPO | SQ (%) | 37.85 | 14 |
| Scientific Discovery | TMC | Solution Quality | 93.25 | 14 |
| Scientific Discovery | Average (MBO, NHO, SPO, TMC) | Avg APD | 49.7 | 14 |
| Nanophotonic Helix Optimization | NHO (test) | SQ | 96.36 | 5 |