Graph Neural Networks are Heuristics
About
We demonstrate that a single training trajectory can transform a graph neural network into an unsupervised heuristic for combinatorial optimization. Focusing on the Travelling Salesman Problem, we show that encoding global structural constraints as an inductive bias enables a non-autoregressive model to generate solutions via direct forward passes, without search, supervision, or sequential decision-making. At inference time, dropout and snapshot ensembling allow a single model to act as an implicit ensemble, reducing optimality gaps through increased solution diversity. Our results establish that graph neural networks require neither supervised training nor explicit search to be effective. Instead, they can internalize global combinatorial structure and function as strong, learned heuristics. This reframes the role of learning in combinatorial optimization: from augmenting classical algorithms to directly instantiating new heuristics.
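The inference-time idea can be illustrated with a minimal sketch: run several stochastic forward passes of one model (here, dropout applied to edge scores), decode each resulting heatmap into a tour, and keep the shortest. The "model" below is a hypothetical stand-in (an inverse-distance edge-score matrix), not the paper's trained GNN; `greedy_decode`, `implicit_ensemble`, and all parameters are illustrative assumptions.

```python
import numpy as np

def tour_length(coords, tour):
    # Total Euclidean length of the closed tour.
    d = coords[tour] - coords[np.roll(tour, -1)]
    return float(np.sqrt((d ** 2).sum(axis=1)).sum())

def greedy_decode(heatmap):
    # Starting at node 0, repeatedly follow the highest-scoring edge
    # to an unvisited node. No search, just one greedy sweep.
    n = heatmap.shape[0]
    tour = [0]
    while len(tour) < n:
        scores = heatmap[tour[-1]].astype(float).copy()
        scores[tour] = -np.inf  # forbid revisits
        tour.append(int(scores.argmax()))
    return np.array(tour)

def implicit_ensemble(coords, n_passes=8, drop=0.2, seed=0):
    # Stand-in for stochastic forward passes of a single model:
    # an inverse-distance heatmap (hypothetical edge scores) perturbed
    # by a fresh dropout mask on each pass. The best decoded tour wins.
    rng = np.random.default_rng(seed)
    n = len(coords)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    base = 1.0 / (dist + np.eye(n))
    best_tour, best_len = None, np.inf
    for _ in range(n_passes):
        mask = rng.random((n, n)) > drop  # dropout on edge scores
        tour = greedy_decode(base * mask)
        length = tour_length(coords, tour)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Usage: solve a random 20-city Euclidean instance in the unit square.
coords = np.random.default_rng(1).random((20, 2))
tour, length = implicit_ensemble(coords, n_passes=16)
```

Each pass is a plain forward decode; diversity comes only from the dropout masks, so one set of weights behaves like an ensemble of decoders.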
Related benchmarks
| Task | Dataset | Tour length | Rank |
|---|---|---|---|
| Traveling Salesman Problem | Euclidean TSP, n=500, uniform distribution in unit square (test) | 18.47 | 14 |
| Traveling Salesman Problem | Euclidean TSP, n=200, uniform distribution in unit square (test) | 11.55 | 14 |
| Traveling Salesman Problem | Euclidean TSP, n=100, uniform distribution in unit square (test) | 8.09 | 14 |