
Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization

About

Diffusion models have recently advanced Combinatorial Optimization (CO) as a powerful backbone for neural solvers. However, their iterative sampling process, which requires denoising across multiple noise levels, incurs substantial overhead. We propose to learn direct mappings from different noise levels to the optimal solution for a given instance, facilitating high-quality generation with minimal shots. This is achieved through an optimization consistency training protocol, which, for a given instance, minimizes the difference among samples originating from varying generative trajectories and time steps relative to the optimal solution. The proposed model enables fast single-step solution generation while retaining the option of multi-step sampling to trade speed for solution quality, offering a more effective and efficient backbone for neural solvers. In addition, within the training-to-testing (T2T) framework, to bridge the gap between training on historical instances and solving new instances, we introduce a novel consistency-based gradient search scheme at test time, enabling more effective exploration of the solution space learned during training. It works by updating the latent solution probabilities under objective gradient guidance while alternating noise injection and denoising steps. We refer to this model as Fast T2T. Extensive experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T in both solution quality and efficiency, even outperforming LKH under limited time budgets. Notably, Fast T2T with merely one-step generation and one-step gradient search mostly outperforms the SOTA diffusion-based counterparts that require hundreds of steps, while achieving tens of times speedup.
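The optimization consistency idea can be illustrated with a minimal sketch: corrupt the known optimal solution at several noise levels, then train (or here, evaluate) a model that maps every noisy sample directly back to the optimum in one step. The helper names (`consistency_targets`, `consistency_loss`, `f_theta`) and the additive-Gaussian corruption are illustrative assumptions, not the paper's actual architecture or noise schedule.

```python
import numpy as np

def consistency_targets(x_star, noise_levels, rng):
    """Corrupt the optimal solution x_star at several noise levels.

    Hypothetical sketch: each noisy sample x_t = x* + sigma_t * eps comes
    from the same instance, so a consistency-trained model should map
    every x_t directly back to x*, regardless of the noise level.
    """
    eps = rng.standard_normal((len(noise_levels),) + x_star.shape)
    return x_star[None] + np.asarray(noise_levels)[:, None] * eps

def consistency_loss(f_theta, x_star, noise_levels, rng):
    """Mean squared difference between one-step predictions and x*."""
    x_noisy = consistency_targets(x_star, noise_levels, rng)
    preds = np.stack([f_theta(x_t, t) for x_t, t in zip(x_noisy, noise_levels)])
    return float(np.mean((preds - x_star[None]) ** 2))

rng = np.random.default_rng(0)
x_star = np.array([1.0, 0.0, 1.0, 1.0])  # e.g. a binary solution indicator

# A perfect one-step denoiser recovers x* from any noise level: loss 0.
perfect = lambda x_t, t: x_star
assert consistency_loss(perfect, x_star, [0.1, 0.5, 1.0], rng) == 0.0

# A model that just echoes its noisy input incurs a positive loss.
identity = lambda x_t, t: x_t
assert consistency_loss(identity, x_star, [0.5, 1.0], rng) > 0.0
```

In the actual method the model is a graph neural network over TSP/MIS instances and the loss drives training; here the two toy models merely show that the objective is zero exactly when every trajectory and time step maps to the same optimal solution.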

Yang Li, Jinpei Guo, Runzhong Wang, Hongyuan Zha, Junchi Yan • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Traveling Salesman Problem | TSP-500 (test) | Gap | 0.21 | 85 |
| Traveling Salesman Problem | TSP50 | Optimality Gap | 0.01 | 58 |
| Traveling Salesman Problem | TSP-100 | Optimality Drop | 0.01 | 53 |
| Maximum Independent Set | ER [700-800] | Solution Size | 41.73 | 48 |
| Traveling Salesperson Problem | TSP-100 | Solution Length | 7.76 | 42 |
| Traveling Salesman Problem | TSP-1000 (test) | Optimality Gap | 0.42 | 36 |
| Traveling Salesman Problem | TSP-500 | Solution Length | 16.66 | 32 |
| Traveling Salesperson Problem | TSP-1k | Solution Length | 23.35 | 31 |
| Maximum Independent Set | SATLIB | MIS Size | 425.2 | 23 |
| Traveling Salesman Problem | TSP 1000 | Objective Value | 24.88 | 20 |

Showing 10 of 23 rows.
