
AlphaLab: Autonomous Multi-Agent Research Across Optimization Domains with Frontier LLMs

About

We present AlphaLab, an autonomous research harness that leverages frontier LLM agentic capabilities to automate the full experimental cycle in quantitative, computation-intensive domains. Given only a dataset and a natural-language objective, AlphaLab proceeds through three phases without human intervention: (1) it adapts to the domain and explores the data, writing analysis code and producing a research report; (2) it constructs and adversarially validates its own evaluation framework; and (3) it runs large-scale GPU experiments via a Strategist/Worker loop, accumulating domain knowledge in a persistent playbook that functions as a form of online prompt optimization. All domain-specific behavior is factored into adapters generated by the model itself, so the same pipeline handles qualitatively different tasks without modification. We evaluate AlphaLab with two frontier LLMs (GPT-5.2 and Claude Opus 4.6) on three domains: CUDA kernel optimization, where it writes GPU kernels that run 4.4x faster than torch.compile on average (up to 91x); LLM pretraining, where the full system achieves 22% lower validation loss than a single-shot baseline using the same model; and traffic forecasting, where it beats standard baselines by 23-25% after researching and implementing published model families from the literature. The two models discover qualitatively different solutions in every domain (neither dominates uniformly), suggesting that multi-model campaigns provide complementary search coverage. We additionally report results on financial time series forecasting in the appendix, and release all code at https://brendanhogan.github.io/alphalab-paper/.
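The phase-3 Strategist/Worker loop with a persistent playbook can be pictured with a minimal sketch. This is an illustrative assumption, not the AlphaLab implementation: the names `Strategist`, `worker`, and `Playbook`, and the placeholder scoring, are invented for exposition; a real system would call an LLM where noted.

```python
# Minimal sketch (assumed structure, not AlphaLab's code) of a
# Strategist/Worker loop that accumulates lessons in a persistent
# playbook, which acts as a form of online prompt optimization.
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """Persistent store of lessons carried across experiment rounds."""
    lessons: list = field(default_factory=list)

    def as_prompt(self) -> str:
        # Lessons would be prepended to the Strategist's prompt.
        return "\n".join(f"- {lesson}" for lesson in self.lessons)

def strategist(playbook: Playbook, history: list) -> dict:
    """Propose the next experiment, conditioned on accumulated lessons.
    A real system would query an LLM here with playbook.as_prompt()."""
    return {"id": len(history), "plan": f"experiment-{len(history)}"}

def worker(experiment: dict) -> tuple:
    """Run one experiment and report a (score, lesson) pair.
    Placeholder scoring stands in for a real GPU run."""
    score = experiment["id"] * 0.1
    lesson = f"{experiment['plan']} scored {score:.1f}"
    return score, lesson

def run_campaign(n_rounds: int = 3):
    playbook, history = Playbook(), []
    for _ in range(n_rounds):
        exp = strategist(playbook, history)
        score, lesson = worker(exp)
        history.append((exp, score))
        playbook.lessons.append(lesson)  # knowledge persists across rounds
    return playbook, history

playbook, history = run_campaign()
```

The key design point the sketch mirrors is that the playbook outlives any single experiment, so each Strategist proposal is conditioned on everything learned so far.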

Brendan R. Hogan, Xiwen Chen, James T. Wilson, Kashif Rasul, Adel Boyarsky, Thomas Kamei, Anderson Schneider, Yuriy Nevmyvaka • 2026

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Traffic Forecasting | Traffic forecasting (test) | RMSE | 0.0214 | 23 |
| Time Series Forecasting | Traffic | RMSE | 0.0214 | 20 |
| Financial Forecasting | Exchange-rate, 5 rolling-origin windows, GPT-5.2 (test) | Sharpe Ratio | 4.214 | 10 |
| CUDA Kernel Generation | KernelBench Level 1 (single-op) | fast1 Performance (Native) | 89 | 5 |
| CUDA Kernel Generation | KernelBench Level 2 (fusion) | fast1 Performance (Native) | 88 | 5 |
| LLM Pretraining | PleIAs SYNTH (val) | Validation BPB | 0.7578 | 5 |
| LLM Pretraining | 500-shard corpus (val) | Validation BPB | 0.758 | 3 |
| CUDA Kernel Optimization | CUDA kernels, 66-task overlap | Speedup | 5.14 | 2 |
