Contrastive Losses and Solution Caching for Predict-and-Optimize
About
Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historical data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e., the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow growth rate of the cache is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.
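To make the two contributions concrete, below is a minimal sketch, not the authors' reference implementation. It assumes a linear objective min ĉᵀv over a fixed feasible set; the names `ContrastiveCache`, `nce_loss`, and the `p_solve` parameter are hypothetical, chosen for this illustration.

```python
import random
import numpy as np

class ContrastiveCache:
    """Cache of known feasible solutions. The pool serves two roles:
    (1) negative examples for the contrastive loss, and
    (2) an inner approximation of the feasible space that replaces
        most optimization calls with a cheap lookup."""

    def __init__(self, solver, p_solve=0.05):
        self.solver = solver    # exact oracle: c -> argmin over feasible v of c @ v
        self.p_solve = p_solve  # cache growth rate: probability of a true solver call
        self.pool = []          # cached feasible solutions (np.ndarray)

    def solve(self, c):
        # With probability p_solve, call the exact solver and grow the cache;
        # otherwise approximate the argmin by a lookup over cached solutions.
        if not self.pool or random.random() < self.p_solve:
            v = self.solver(c)
            if not any(np.array_equal(v, s) for s in self.pool):
                self.pool.append(v)
            return v
        return min(self.pool, key=lambda s: float(c @ s))

def nce_loss(c_hat, v_star, pool):
    """Noise-contrastive surrogate: push the predicted cost of the true
    optimum v_star below that of every cached (non-optimal) solution."""
    negatives = [s for s in pool if not np.array_equal(s, v_star)]
    if not negatives:
        return 0.0
    return sum(float(c_hat @ v_star - c_hat @ s) for s in negatives) / len(negatives)
```

In an actual training loop, `cache.solve(c_hat)` would stand in for the per-instance optimization call, and `c_hat` would be an autograd tensor (e.g., PyTorch) rather than a NumPy array so that the surrogate loss is differentiable with respect to the predictive model's parameters.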
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Shortest Path | Shortest Path Degree 2, 4, 6, 8 (test) | Average Relative Regret | 9.59 | 32 |
| Knapsack Problem | Knapsack Degree 2, 4, 6, 8 (test) | Average Relative Regret | 20.07 | 32 |
| Portfolio Optimization | Portfolio Degree 2, 4, 6, 8 (test) | Average Relative Regret | 7.81 | 32 |
| Resource Allocation | COVID Resource Allocation (test) | Average Relative Regret | 16.48 | 8 |
| Energy Scheduling | Energy (test) | Average Relative Regret | 1.59 | 8 |