EGAM: Extended Graph Attention Model for Solving Routing Problems
About
Neural combinatorial optimization (NCO) solvers, implemented with graph neural networks (GNNs), have introduced new approaches for solving routing problems. Trained with reinforcement learning (RL), the state-of-the-art graph attention model (GAM) achieves near-optimal solutions without requiring expert knowledge or labeled data. In this work, we generalize the existing graph attention mechanism and propose the extended graph attention model (EGAM). Our model utilizes multi-head dot-product attention to update both node and edge embeddings, addressing the limitations of the conventional GAM, which considers only node features. We employ an autoregressive encoder-decoder architecture and train it with policy gradient algorithms that incorporate a specially designed baseline. Experiments show that EGAM matches or outperforms existing methods across various routing problems. Notably, the proposed model demonstrates exceptional performance on highly constrained problems, highlighting its efficiency in handling complex graph structures.
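The core extension described above — attention that reads and writes edge embeddings in addition to node embeddings — can be illustrated with a minimal single-head NumPy sketch. This is not the paper's exact formulation: the parameter names (`Wq`, `Wk`, `Wv`, `We`), the scalar edge bias on the attention logits, and the residual edge-update rule are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def edge_aware_attention(H, E, Wq, Wk, Wv, We):
    """Single-head sketch of edge-aware dot-product attention.

    H  : (n, d)    node embeddings
    E  : (n, n, d) edge embeddings
    We : (d,)      projects each edge embedding to a scalar logit bias
    The edge-update rule below (residual mix with the value message of
    the edge's target node) is a hypothetical choice for illustration.
    """
    d = Wq.shape[1]
    Q, K, V = H @ Wq, H @ Wk, H @ Wv            # (n, d) each
    edge_bias = E @ We                          # (n, n) scalar bias per edge
    logits = (Q @ K.T + edge_bias) / np.sqrt(d)
    A = softmax(logits, axis=-1)                # (n, n) attention weights
    H_new = A @ V                               # node update: weighted sum of values
    E_new = E + A[..., None] * V[None, :, :]    # edge update: attention-scaled message
    return H_new, E_new
```

In a full multi-head model, this block would be repeated per head with separate parameters and the head outputs concatenated, as in standard transformer-style encoders.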
Related benchmarks
| Task | Dataset | Result (cost) | Rank |
|---|---|---|---|
| Traveling Salesman Problem (TSP) | 10,000 randomly generated instances (test) | 5.7 | 20 |
| Capacitated Vehicle Routing Problem (CVRP) | 10,000 randomly generated instances (test) | 10.48 | 13 |
| Traveling Salesman Problem with Time Windows (TSPTW) | Randomly generated instances (test) | 13.42 | 12 |
| Traveling Salesman Problem with Delivery and Locality (TSPDL) | Randomly generated instances (test) | 11.19 | 11 |
| Vehicle Routing Problem with Time Windows (VRPTW) | 10,000 instances (test) | 18.83 | 11 |
| Prize-Collecting Traveling Salesman Problem (PCTSP) | 10,000 randomly generated instances (test) | 4.48 | 11 |