Smart "Predict, then Optimize"

About

Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart "Predict, then Optimize" (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing better prediction models. A key component of our framework is the SPO loss function which measures the decision error induced by a prediction. Training a prediction model with respect to the SPO loss is computationally challenging, and thus we derive, using duality theory, a convex surrogate loss function which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest path and portfolio optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular when the prediction model being trained is misspecified. We find that linear models trained using SPO+ loss tend to dominate random forest algorithms, even when the ground truth is highly nonlinear.
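To make the two losses concrete, here is a minimal sketch for the simplest polyhedral case: minimizing a linear objective over the unit simplex, whose optimal solution is the one-hot vector at the argmin of the cost. The SPO loss is the true cost of the decision induced by the prediction minus the true optimum; the SPO+ surrogate follows the paper's formula max_w {(c − 2ĉ)ᵀw} + 2ĉᵀw*(c) − z*(c). Function names are illustrative, not from the authors' code.

```python
def argmin_index(c):
    """Optimization oracle for the unit simplex: the optimal vertex is
    the one-hot vector at the index of the smallest cost entry."""
    return min(range(len(c)), key=lambda i: c[i])

def spo_loss(c_hat, c):
    """SPO (decision) loss: true cost c of the decision induced by the
    prediction c_hat, minus the true optimal value z*(c) = min_i c_i."""
    w_star_hat = argmin_index(c_hat)   # decision made from the prediction
    return c[w_star_hat] - min(c)

def spo_plus_loss(c_hat, c):
    """SPO+ convex surrogate:
    max_w {(c - 2*c_hat)^T w} + 2*c_hat^T w*(c) - z*(c).
    Over the simplex, the max over w reduces to a max over coordinates."""
    i_star = argmin_index(c)           # true optimal vertex w*(c)
    max_term = max(c[i] - 2.0 * c_hat[i] for i in range(len(c)))
    return max_term + 2.0 * c_hat[i_star] - min(c)
```

For a perfect prediction, `spo_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])` and the corresponding SPO+ value are both 0; for a reversed prediction `[3.0, 2.0, 1.0]` the SPO loss is 2 while SPO+ is 6, illustrating that SPO+ upper-bounds the decision error while remaining convex (and subdifferentiable) in `c_hat`, which is what makes it trainable.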

Adam N. Elmachtoub, Paul Grigas · 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Portfolio Optimization | Multivariate Return Series clean data (evaluation) | MSE | 0.0245 | 35 |
| Portfolio Optimization | Noisy Portfolio Return Series | MSE | 0.0224 | 35 |
| Knapsack Problem | Knapsack Degree 2, 4, 6, 8 (test) | Average Relative Regret | 16.58 | 32 |
| Shortest Path | Shortest Path Degree 2, 4, 6, 8 (test) | Average Relative Regret | 3.23 | 32 |
| Portfolio Optimization | Portfolio Degree 2, 4, 6, 8 (test) | Average Relative Regret | 6.92 | 32 |
| Minimum Cost Vertex Cover | PDH Real-life (test) | Mean Regret | 6.58 | 20 |
| Minimum Cost Vertex Cover | POLSKA Artificial (test) | Mean Regret | 116.7 | 20 |
| Minimum Cost Vertex Cover | POLSKA Real-life (test) | Mean Regret | 2.78 | 20 |
| Minimum Cost Vertex Cover | PDH Artificial (test) | Mean Regret | 69.46 | 20 |
| Set Matching | Set Matching (SM3) (test) | Regret (%) | 88.02 | 10 |
Showing 10 of 28 rows