
A Penalty Approach for Differentiation Through Black-Box Quadratic Programming Solvers

About

Differentiating through the solution of a quadratic program (QP) is a central problem in differentiable optimization. Most existing approaches differentiate through the Karush--Kuhn--Tucker (KKT) system, but their computational cost and numerical robustness can degrade at scale. To address these limitations, we propose dXPP, a penalty-based differentiation framework that decouples QP solving from differentiation. In the solving step (forward pass), dXPP is solver-agnostic and can leverage any black-box QP solver. In the differentiation step (backward pass), we map the solution to a smooth approximate penalty problem and implicitly differentiate through it, requiring only the solution of a much smaller linear system in the primal variables. This approach bypasses the difficulties inherent in explicit KKT differentiation and significantly improves computational efficiency and robustness. We evaluate dXPP on various tasks, including randomly generated QPs, large-scale sparse projection problems, and a real-world multi-period portfolio optimization task. Empirical results demonstrate that dXPP is competitive with KKT-based differentiation methods and achieves substantial speedups on large-scale problems.
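The forward/backward split described above can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: it uses SciPy's SLSQP as a stand-in black-box QP solver, a squared-hinge penalty as the smooth surrogate, and hypothetical function names (`solve_qp`, `grad_wrt_q`). The backward pass solves only an n-by-n linear system in the primal variables, rather than a full KKT system.

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp(Q, q, A, b):
    """Forward pass: solve min 1/2 x'Qx + q'x s.t. Ax <= b.
    Any black-box QP solver works; SciPy's SLSQP is used here
    only to keep the sketch self-contained."""
    cons = {"type": "ineq",
            "fun": lambda x: b - A @ x,
            "jac": lambda x: -A}
    res = minimize(lambda x: 0.5 * x @ Q @ x + q @ x,
                   np.zeros(len(q)),
                   jac=lambda x: Q @ x + q,
                   method="SLSQP", constraints=cons,
                   options={"ftol": 1e-12})
    return res.x

def grad_wrt_q(Q, q, A, b, x, rho=1e4, tol=1e-6):
    """Backward pass: implicitly differentiate the smooth penalty
    surrogate F(x) = 1/2 x'Qx + q'x + rho/2 ||max(Ax - b, 0)||^2
    at the solution. Stationarity grad_x F = 0 yields
        dx/dq = -H^{-1},  H = Q + rho * A_act' A_act,
    a linear system only in the primal variables."""
    act = (A @ x - b) > -tol          # constraints active at x
    H = Q + rho * A[act].T @ A[act]
    return np.linalg.solve(H, -np.eye(len(x)))
```

For a small test problem (Q = 2I, q = (-2, -2), constraint x1 + x2 <= 0.5) the solution is (0.25, 0.25), and the penalty Jacobian agrees with the exact KKT derivative [[-0.25, 0.25], [0.25, -0.25]] up to an O(1/rho) error, shrinking as the penalty parameter grows.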

Yuxuan Linghu, Zhiyuan Liu, Qi Deng • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Projection onto chains | Projection onto chains | Backward Latency (ms) | 1.06 | 29 |
| Projection onto the probability simplex | Probability Simplex | Backward Time (ms) | 0.72 | 26 |
| Portfolio Optimization | Multi-period Portfolio Optimization | Backward Latency (ms) | 1.71 | 25 |
| Sudoku Solving | Sudoku | Average per-epoch runtime (ms) | 12.15 | 9 |
