
OptiML: An End-to-End Framework for Program Synthesis and CUDA Kernel Optimization

About

Generating high-performance CUDA kernels remains challenging due to the need to navigate a combinatorial space of low-level transformations under noisy and expensive hardware feedback. Although large language models can synthesize functionally correct CUDA code, achieving competitive performance requires systematic exploration and verification of optimization choices. We present OptiML, an end-to-end framework that maps either natural-language intent or input CUDA code to performance-optimized CUDA kernels by formulating kernel optimization as search under verification. OptiML consists of two decoupled stages. When the input is natural language, a Mixture-of-Thoughts generator (OptiML-G) acts as a proposal policy over kernel implementation strategies, producing an initial executable program. A search-based optimizer (OptiML-X) then refines either synthesized or user-provided kernels using Monte Carlo Tree Search over LLM-driven edits, guided by a hardware-aware reward derived from profiler feedback. Each candidate transformation is compiled, verified, and profiled with Nsight Compute, and evaluated by a composite objective that combines runtime with hardware bottleneck proxies and guardrails against regressions. We evaluate OptiML in both synthesis-and-optimize and optimization-only settings on a diverse suite of CUDA kernels. Results show that OptiML consistently discovers verified performance improvements over strong LLM baselines and produces interpretable optimization trajectories grounded in profiler evidence.
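The optimizer stage described above (MCTS over LLM-proposed edits, scored by a composite, guardrailed reward) can be illustrated with a minimal sketch. Everything here is a mocked stand-in, not the authors' implementation: `profile` replaces Nsight Compute profiling, `propose_edits` replaces the LLM edit policy, and the reward weights and occupancy proxy are illustrative assumptions.

```python
import math
import random

def _h(s):
    """Stable per-candidate seed so the mocked profiler is deterministic."""
    return sum(map(ord, s))

def profile(kernel):
    """Mocked profiler: returns (runtime_ms, occupancy) for a candidate.
    The real system compiles, verifies, and profiles with Nsight Compute."""
    random.seed(_h(kernel))
    return random.uniform(1.0, 5.0), random.uniform(0.3, 0.9)

def reward(runtime, occupancy, baseline_runtime):
    """Composite objective: relative runtime improvement plus a hardware
    bottleneck proxy (occupancy here, as an illustrative choice), with a
    guardrail that rejects any regression outright."""
    if runtime > baseline_runtime:          # guardrail: no regressions
        return float("-inf")
    return (baseline_runtime - runtime) / baseline_runtime + 0.1 * occupancy

def propose_edits(kernel):
    """Mocked LLM proposal policy (e.g. tiling, vectorized loads)."""
    return [kernel + "+edit%d" % i for i in range(3)]

def ucb(node, parent_visits, c=1.4):
    """Upper confidence bound for tree selection."""
    if node["visits"] == 0:
        return float("inf")
    return node["value"] / node["visits"] + c * math.sqrt(
        math.log(parent_visits) / node["visits"])

def mcts_optimize(seed_kernel, iterations=50):
    base_rt, _ = profile(seed_kernel)
    root = {"kernel": seed_kernel, "visits": 0, "value": 0.0, "children": []}
    best = (seed_kernel, 0.0)
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        path, node = [root], root
        while node["children"]:
            parent_visits = node["visits"] + 1
            node = max(node["children"], key=lambda n: ucb(n, parent_visits))
            path.append(node)
        # Expansion: ask the (mocked) edit policy for candidates.
        node["children"] = [
            {"kernel": k, "visits": 0, "value": 0.0, "children": []}
            for k in propose_edits(node["kernel"])
        ]
        child = random.choice(node["children"])
        path.append(child)
        # Simulation: compile/verify/profile the candidate (mocked here).
        rt, occ = profile(child["kernel"])
        r = reward(rt, occ, base_rt)
        if r > best[1]:
            best = (child["kernel"], r)
        # Backpropagation (clip the -inf guardrail for visit statistics).
        for n in path:
            n["visits"] += 1
            n["value"] += max(r, -1.0)
    return best

best_kernel, best_reward = mcts_optimize("seed_kernel")
print(best_kernel, round(best_reward, 3))
```

The returned trajectory of applied edits is what makes the search interpretable: each accepted edit is backed by a verified, profiled improvement over the baseline rather than an unverified model suggestion.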

Arijit Bhattacharjee, Heng Ping, Son Vu Le, Paul Bogdan, Nesreen K. Ahmed, Ali Jannesari • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| CUDA Kernel Optimization | ParEval 5-task subset | Pass@1 | 85 | 9 |
| Dot Product | CUDA-LLM task suite | Execution Time (ms) | 3.99 | 9 |
| Matrix Copy | CUDA-LLM task suite | Execution Time | 5.12 | 9 |
| Matrix Multiplication | CUDA-LLM kernels task suite 1.0 (test) | Execution Time | 4.4 | 9 |
| Matrix Multiplication | CUDA-LLM kernels task suite Matrix Multiplication | Execution Time (s) | 4.4 | 9 |
| Matrix Transpose | CUDA-LLM task suite | Time | 5.2 | 9 |
| Mean Square Error | CUDA-LLM kernels task suite 1.0 (test) | Execution Time | 4.79 | 9 |
| Reduction | CUDA-LLM task suite | Time | 5.75 | 9 |
| ReLU Activation Function | CUDA-LLM task suite | Time | 4.49 | 9 |
| Reverse Array | CUDA-LLM task suite | Execution Time | 4.07 | 9 |

Showing 10 of 23 rows.
