# Learning 2-opt Heuristics for the Traveling Salesman Problem via Deep Reinforcement Learning

## About
Recent works using deep learning to solve the Traveling Salesman Problem (TSP) have focused on learning construction heuristics. Such approaches find TSP solutions of good quality but require additional procedures, such as beam search and sampling, to improve solutions and achieve state-of-the-art performance. However, few studies have focused on improvement heuristics, where a given solution is iteratively improved until a near-optimal one is reached. In this work, we propose to learn a local search heuristic based on 2-opt operators via deep reinforcement learning. We propose a policy gradient algorithm to learn a stochastic policy that selects 2-opt operations given a current solution. Moreover, we introduce a policy neural network that leverages a pointing attention mechanism, which, unlike previous works, can be easily extended to more general k-opt moves. Our results show that the learned policies can improve even over random initial solutions and approach near-optimal solutions at a faster rate than previous state-of-the-art deep learning methods.
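For context on the move the policy selects: a 2-opt operation removes two edges from the current tour and reconnects it by reversing the segment between them. The sketch below is a plain greedy 2-opt baseline, not the learned policy from the paper; function names and the termination criterion are illustrative.

```python
import math

def tour_length(tour, coords):
    """Total Euclidean length of a closed tour."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def two_opt_move(tour, i, j):
    """Apply a 2-opt move: remove edges (tour[i], tour[i+1]) and
    (tour[j], tour[j+1]), then reconnect by reversing tour[i+1..j]."""
    return tour[: i + 1] + tour[i + 1 : j + 1][::-1] + tour[j + 1 :]

def greedy_two_opt(coords, tour=None):
    """Greedy 2-opt local search: repeatedly apply the first
    improving move until no improving move remains."""
    n = len(coords)
    tour = list(range(n)) if tour is None else list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                candidate = two_opt_move(tour, i, j)
                if tour_length(candidate, coords) < tour_length(tour, coords) - 1e-12:
                    tour, improved = candidate, True
    return tour
```

The learned policy replaces the exhaustive inner loops with a neural network that outputs which pair (i, j) to reverse at each step.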
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Traveling Salesman Problem (TSP) | TSP n=100 10K instances (test) | Objective Value | 7.79 | 52 |
| Traveling Salesman Problem | TSP N=20 10,000 instances (test) | Objective Value | 3.83 | 16 |
| Traveling Salesman Problem | TSP N=50 10,000 instances (test) | Objective Value | 5.7 | 16 |
| Traveling Salesperson Problem | TSPLIB Real-world instances 1.0 | Optimality Gap (%) | 0.0023 | 12 |
| Traveling Salesperson Problem | TSP n=100 (train) | Objective Value | 7.87 | 9 |
| Capacitated Vehicle Routing Problem | CVRP n=100 (train) | Objective Value | 16.03 | 7 |
| Capacitated Vehicle Routing Problem | CVRP n=100 Training distribution Uniform (test) | Objective Value | 16.03 | 7 |