# Deep Policy Dynamic Programming for Vehicle Routing Problems

## About
Routing problems are a class of combinatorial optimization problems with many practical applications. Recently, end-to-end deep learning methods have been proposed to learn approximate solution heuristics for such problems. In contrast, classical dynamic programming (DP) algorithms guarantee optimal solutions but scale poorly with problem size. We propose Deep Policy Dynamic Programming (DPDP), which aims to combine the strengths of learned neural heuristics with those of DP algorithms. DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions. We evaluate our framework on the travelling salesman problem (TSP), the vehicle routing problem (VRP) and the TSP with time windows (TSPTW), and show that the neural policy improves the performance of (restricted) DP algorithms, making them competitive with strong alternatives such as LKH, while also outperforming most other neural approaches for solving TSPs, VRPs and TSPTWs with 100 nodes.
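The core idea — restricting a DP state space with a learned edge-scoring policy — can be illustrated with a minimal sketch. This is not the authors' implementation: below, a beam-restricted Held-Karp DP for the TSP keeps only the states ranked highest by a "heatmap" of edge scores (in DPDP these scores come from a trained graph neural network; here the heatmap is simply an input, and the function name `dpdp_tsp` is our own).

```python
import math

def dpdp_tsp(dist, heat, beam_size):
    """Beam-restricted Held-Karp DP for TSP, prioritized by edge scores.

    dist: n x n distance matrix; heat: n x n edge scores (higher = more
    promising; in DPDP these come from a trained neural network).
    Returns (tour_cost, tour) starting and ending at node 0.
    """
    n = len(dist)
    # DP state: (visited node set, current node) -> (cost, heat score, path)
    states = {(frozenset([0]), 0): (0.0, 0.0, [0])}
    for _ in range(n - 1):
        nxt = {}
        for (visited, cur), (cost, score, path) in states.items():
            for j in range(n):
                if j in visited:
                    continue
                key = (visited | {j}, j)
                cand = (cost + dist[cur][j], score + heat[cur][j], path + [j])
                # dominance: keep the cheapest partial tour per DP state
                if key not in nxt or cand[0] < nxt[key][0]:
                    nxt[key] = cand
        # policy step: only the beam_size highest-scoring states survive
        states = dict(sorted(nxt.items(), key=lambda kv: -kv[1][1])[:beam_size])
    # close the tour back to the depot node 0
    best = min(states.values(), key=lambda v: v[0] + dist[v[2][-1]][0])
    return best[0] + dist[best[2][-1]][0], best[2] + [0]

# Toy instance: unit square, where the optimal tour has length 4.0.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
heat = [[1.0] * 4 for _ in range(4)]  # uniform scores stand in for a model
cost, tour = dpdp_tsp(dist, heat, beam_size=100)  # cost == 4.0
```

With a beam size large enough to hold all states, this reduces to exact Held-Karp DP; shrinking the beam trades optimality for speed, which is where a good learned heatmap pays off.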
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Traveling Salesman Problem (TSP) | TSP n=100 10K instances (test) | Objective Value | 7.765 | 52 |
| Capacitated Vehicle Routing Problem | CVRP N=100 10,000 instances (test) | Objective Value | 15.69 | 28 |
| Capacitated Vehicle Routing Problem | CVRP N=100 (test 10k inst.) | Optimality Gap | 0.41 | 22 |
| Capacitated Vehicle Routing Problem | CVRP n=100 (10k instances) | Optimality Gap | 0.4 | 21 |
| Capacitated Vehicle Routing Problem (CVRP) | CVRP n=150 1K instances (Generalization) | Objective Value | 19.312 | 18 |
| Capacitated Vehicle Routing Problem | CVRP n=150 1k instances | Objective Value | 19.31 | 17 |
| Capacitated Vehicle Routing Problem | CVRP n=125 (1k instances) | Objective Value | 17.51 | 16 |
| Capacitated Vehicle Routing Problem (CVRP) | CVRP n=200 Generalization 1K instances | Objective Value | 22.263 | 12 |
| Traveling Salesman Problem | TSP n=100 10k instances Jumanji (Inference) | Optimality Gap | 0.004 | 9 |
| Traveling Salesman Problem | TSP 0-shot n=150, 1k instances Jumanji | Objective Value | 9.434 | 8 |